Compare commits
719 Commits
2023.0.0.d ... 2023.0.0.d
SHA1:
4ade0e5533, 06cacfe2a7, 132b657977, 6d82f36050, 3be946371d, 07437eec1e, 51967fd27b, df4d7bd3e9,
e17a6f29bf, 24ab3f7c41, 5e2f424fd0, c7c7c4bb05, 4812879318, d2deae225a, 5ccc743707, 5f416dc4d2,
906ec7ee1b, 92eb62fe63, d732024ccb, cb436112b2, cafc7359c5, dbe051aa79, 7a5c472ccc, 932a668a2f,
bc0c8374da, 53d9b26e1f, b2a64e8c3a, 9f0e557744, 70ef0b5316, f2894d09e9, 38c8a3d15b, 362389c733,
f95fd27c16, 1c564226f3, 6e97c82c97, 2e0bac34db, ff8f361778, 8c2766c4bc, 7442a17240, fef04e468a,
44cfbea9ab, f5e199c494, 4098434233, 837f5a7d53, 73ab0dd065, f2d4c96032, c474f564a9, f9bd2d2c1e,
45daa2095f, 18c876bf23, 093990118d, c034975183, d4b394c1b6, 1ee0c151ea, 9f54504232, 8691ec2779,
f4fe856d9d, 90615cf26a, 4f7f7c31ee, 06e6a69356, f6c7213ae4, 0a56927671, dec425c408, 03ab0e4388,
6237868437, f7d15e12c8, b7b788917d, 86da15e621, e7c1cdf982, 016d36f032, 44330b22bd, f4fca2d578,
b2e4857a64, 02b35d7984, 3a5b819685, 2f5be5e81c, 848c9e3b76, cddbb667a5, 950b46ecad, bb20151c9d,
f5dced8e69, 8491f15ba7, b64cbff10b, d7f70b647b, 99eda5b5e1, e978db3132, 186a1ccdcd, 66ea57addd,
341217de99, 9a5a8f6abc, c33a3f87f0, 6e09e53f0d, 6d1e5d336d, f9ff518d16, bb93bfd90f, fc88bed604,
35398e339d, 6d064d26cb, ee0bb79ed6, 43fca3d231, 1b9bd61767, 73e75c58ba, 385bbbd49b, 8fad140a02,
9cf4ee1eae, bf8e5cb4a2, fc95d8e544, e94f7b25c0, 5e149aa0dd, ab96cc939b, b3503c8b7a, 961a99586a,
392b67f082, 7983e00b00, 87365fa21d, 086ee93bcd, ccf9c19f61, 5b203efb9c, 5eea99d96c, 3573a38e0b,
712d1b99d1, b0e6b1e83c, 2a01695370, 0250f62d11, 7d8f4af78a, 10668f4f3a, 8d59252966, a9360f8045,
0c2308506f, f7e898893d, 591c3e61c5, 988a8dd6a9, f3dcf93f96, df3c06ecb4, f4da729a19, 75c62ea320,
05ab0f32d7, 556d469f6b, 35e03d33bb, 7a95830d24, ea6e3481cd, 966c47e7cd, f7891aa034, daf562832f,
6c766a81b5, b82bedd648, 79b267033c, 40cc006bae, 796bd98913, 8d90c11a35, 4403433309, e169c7cd38,
3849d5aa02, a2218ab169, 38c924a3ae, 253e4eb366, 17c3e67336, 44f0419a0b, 05b0c58521, 55e9cae54f,
6fc0b6479e, 49d150b3b8, d9d1df2fe3, a726f0ae38, 906939a1f1, d06a22f4e4, 5dff012233, 167bf7e16a,
815d4abc03, aa0df8e535, 68e067062f, 1ca94326cb, 5c5a29d095, 6e99b48ecc, 9863b32792, 7ccf1c89cf,
5e9ea6a146, 5113a5538c, 4936d4bb1d, 5e835e327b, 6b70c449ba, 7d16ee1835, bb9de29062, a1b8a6a941,
64e9dc32cd, 2638014d00, 0765fa108a, 3f3bda592b, ce67ac09d3, ab151fd357, 1df14c6a6c, 6eb8f4b2b7,
43ef89e625, 2956717118, 60ab7490bf, e66b837104, a96da994ec, 580b99c99b, 6a25143045, 1ef94ec069,
18df64c135, 81b4666632, 179403ddc9, 953a166a62, 5a8a195dad, c5b348dd4f, 9eab122952, 077d0e43f2,
cabb917b1f, 86c4489aca, 65e5ed7dd7, 16933efc06, 613b66ba35, 3f4b1e8205, fbdd158615, 025115f695,
69cec4a5e2, 8518a3a8e8, e434c320f5, 7601e8a874, 1b89ecdbae, abaf61d059, 52b27d82c5, 74870f9b0b,
2755b32fb9, de0a4e16fb, 44d6d97871, fb24e91416, 8a246a8bf2, 3b8d9c568c, 448654ea65, c89da1aee2,
9d0749a5b7, a004601774, a3958d6ddf, 982e1c1192, 087b10ff00, 5fa95ff19d, 66ae71454a, aaa4a4c210,
17174a3839, a20b3631fb, a205c675db, 6bf2fe11ae, 951c5fdae9, 5290822f8b, 6ac5e42b62, 8eb142ca6e,
c23a1170ba, 4561aa7109, 8509d0dd82, 1b72352f6f, 57c91e0c56, 90100451a3, 066ef694f5, 2f69305aa3,
14e70e76fb, 232c802e07, cbb25e9483, c14e6ef48e, 0070e8d939, f1c3356cfc, 04a2c4ce61, 6cfea099d8,
a71c83d366, a204b04fae, 95636f7715, 5e98696464, d86d94edad, b70e56d110, 234f36e9b7, d8e7b39edb,
85d9c11b97, 0893efe073, d402b6ed3e, 24ff43aa5b, 8926282ac5, 05e54e9f3d, 5d6cd626bc, 5af4a8e8d6,
ec0a1e58d1, 7d56c75d65, 63797db257, 82a992b95d, 60436dee5a, 5cb20f8858, 98237b06b5, 7f8786d9aa,
4ffecce63f, d9c70dbce3, 9c69e2f694, 73bedced87, 1e512af105, 1c7b6a7b2a, 71167df234, 86f0285db2,
ecc2f13dd2, 0c99135d44, 8b31e3aafe, c5f65eea73, 083596e285, 0f4c96e96e, 76e60ff258, 350f8fd95b,
4a4f06ba3b, 997414c64d, f39684a7f8, 4bf5a77ac9, 134ebb8889, c472b020b7, 4411a6ea45, a46fc47e6a,
3f06a9b6fb, bd62da9ffe, 7bce07a46e, 8bfb6afd6a, a001f84cba, 2739a01d64, bc15596c9e, afa61ed3ec,
4f49d0e07e, 9c7f7b8338, b2a2266f60, e6ceed0bb9, 6889101415, d189df169c, a754473689, a99a5057e2,
249d57f37e, d7c88fd694, e1e44d6bac, a9bd5f741d, bb59672639, c5ccb3e954, 8d1139b61a, cb241a8e4a,
0ee8c966b2, e4500c7d61, 6ffa8da922, 6762fe692d, 8a7956e3cb, 91b9675bed, 9229b4967e, a72b9bac2f,
0372ca929a, c18f3824b0, 461cc2aee8, 790f74c01c, fbc420093d, 2194552dc5, 2f3ae4518e, 05866f05ea,
0e1df68263, 072acc1ea7, 8189d18648, 0d5b5b187d, 6ff02f5e25, e1ee8f0ec8, 28d3e1087e, d59d8ba3a2,
523c587d29, bd1b00d654, 0f9583c3cf, fdc2664b24, f0c153858b, 69ba802e03, 76f29f8532, bdf1923972,
ab684036f4, a3d53c0415, 85f80f2a03, d774cc65a9, 36c18e29a8, 4b7b3fb0ae, e4f44b19fd, e44fd03d2a,
4b7d1d9f50, e348481849, 8477bc8897, 7578c636b9, 95faa573ed, 596036a2db, 0ca3ccb7fb, 0145e538f5,
497b7885da, a8be566e24, 164db3def9, 1268bfdca2, 3a96e06d4c, ae37ca671c, dcc8a36d88, 3b71286f1d,
f0f1c47063, 9462b3ea16, ca6ad433e4, fbc9516662, 5311ab0938, 625890c666, f080a0d9cf, a84f87e9dc,
bf8c7fe668, 72566cde0d, 63338b6e08, e19ba8b3e2, 0ffa4eb507, df6cd3303a, 8da01d4c2d, 2eef025773,
1e757de195, 087c3bd0af, e8b108ac6b, 0e91b07422, 32ac952e5f, 1874c072b2, a1a35e9211, 8446f38924,
75314c2c53, 4e89150a7c, 63041ca559, 4d8a4d3957, 5e406a80d3, da8d5ba056, fa2ffc3bb4, b8e1dea345,
198e90944f, 7a0beb5f1d, 3c8bb1492e, fa9677a6ee, 34bab897d6, 670668e593, 0f19e9c0d2, 45bdbf7486,
ec8a4abf6d, a1510a5e5f, 8e24483f5c, da9b014c83, 6586742204, d5e98cbdce, 3ec386a741, dff7f2451b,
31489931cf, 654f3d988f, 326aedb5f8, 1051226fc9, a9cc52b462, e43f606750, f04507f56c, 94b533b284,
474ea8a8e2, 5040f59c96, 0365ebf5ad, cc8295f27e, c7e479ff78, aaeace9740, b7ff3a1d64, 3d52fc843a,
75c73a38f8, c3b22af3f7, f3e7e55968, 3dbea43ef1, 75b48f2153, e5ef0fee8e, 50b76873e2, 0786a963ab,
6514b7600a, cd8999d43b, 7c8dc76223, 68b8d41c43, a9cbccd829, 681faadce3, b907bfab3b, 1bd1eca8d9,
feb448cc89, 6358974c1a, e79636bfbb, cf7dfff35f, 82584543ba, e6ad0a5154, 15e43e0cc2, a1eb76ad06,
6fbaf4745a, 4cea80915d, 0dad7749b5, 87b18a21c1, eff0bce7e3, e77adca01f, 4d3dcfc5d4, 4d7bffa593,
83b57e2a64, 6e7bef529f, 5269cb37d8, 7123e8879e, 41fd836196, 3faf4fcb3e, cf8dccaedb, b8348cda2e,
4ce35fd851, 54d3641baa, e6a65f406d, 20c0927ff9, b9a48f12c8, 8b66b35bf7, 4486470e02, 9b97235902,
79d3ff352e, e605a4c344, 0860db0dc3, e1fbb7d768, 9c4c559909, 32b177e9eb, a1cde2e790, 0edbc5ca60,
6a39d466a4, de50251ceb, b130b73f80, 190b64a0af, 3b6bb06f1d, 35dacb370a, 4e8590bf9b, 7deb9090bf,
84a42cde61, 9efdb38b96, fc98454174, 33e0b8caeb, a16f1923d7, 6979c06ca1, 392e0fda34, c98591f8a8,
4d925e0a3d, 0a31bdc112, df03f8bfce, 8cbd8b8e03, 0e9b133de5, cb7eeadd62, 226bc301dc, 57cf23857a,
7be7f25566, 6185114bc4, 18763f66ac, 65a45a6232, 0d798b7431, 24b0baa0d1, 27ac7d9092, 99a1800901,
c09b2ff8b1, 5422242e86, 3c67509fc8, 2e12af27e4, 8307019380, 6ae024f86e, 4c0d28f26d, ba19d945ac,
c5c7e4ff65, bde65c25c4, 84285ac317, 3de00347f3, f0e12cf38b, 113aefa3ff, 21ac61fef5, b1d0e152e3,
112c763256, 051a1c661e, 7f83c2c72d, a7bb54da2d, 63d282fd73, 51a3a02115, e534efd4a8, 87e714eb5c,
62ff31df8a, 0988c2b813, 07f287e362, f9a8d9132d, 4dff2d1c60, 5e48941f53, f7ccfd9b6e, 7aaf966039,
b748395f7d, 1070a3b6c1, 913f616964, 45dff75356, e5f2903c83, 68b7b8e69b, 93a1be3607, d2a5be0ab8,
2ced2ad929, b30b283f0d, a7443e13fa, 713b37cb25, 957ff6edd8, 2d91c36d32, 5526969eba, 5317b909f7,
4fac7eabc0, 39e63ace67, fabf67ee5e, 15990afea2, c0ef9a862e, bd8c7506f0, 9822568194, 46e8aad4bb,
30939f5021, f13b1e9681, 6d7b94b8cd, 6c0e2686ad, 57cb7015f0, 8dd9ade211, 5e6398a2d8, e21c71dd48,
5c6ef54127, ba45c993ac, ad4bd6f752, d60e3812ca, a730ef18eb, 15935069ff, 091ba1f5ee, 91d1600646,
36992c9c46, 310dd1d4c4, bd3a392d84, 28cfb988e7, 7a465e2422, 5ae859b4f8, c1c8d6320e, 2d960fc6c5,
be5f90199d, f562e96305, a4f0b340d0, f00fb325a6, 8a56234445, bd61d317f8, b8de9beeac, e8d1be6e0f,
6359926815, c72d148ec7, 3cc888555a, 92181b0a4b, 3373c6743f, d9fc5bac80, 1e24c51abb, bc663878eb,
288a750bc6, a9efe5bd8d, 900332c46e, 87bcbc1747, 1028c7b5d5, c749163f72, 1f196bacd3, ed65583957,
6bd7def8c4, 6f85ee7968, 592ad68455, 893f96a7da, 27ea9eab32, 98392a043b, eaf368a5f5, 5251133202,
877018bab6, 548f972e19, c8643a9a30, f41c75b965, 7f3ea9a59c, e34d4e4664, 4fd38844a2, a6ff809ad7,
d464f38788, 95c7c39b91, 3644c26402, 4746c04840, f730edb084, 0ddca519d6, 15973fd2da, 94b64fed79,
86b50044cd, 8153664b52, f347968a1d, 7f3f576151, 9e3b3e0566, a5ec5f5476, ce3ac296ae, 1b147c3560,
cbd56c3ed9, 5d3cd81fd1, c8c4503672, bc8d0ec71e, 1a070b225e, b75a3b3465, 310c4deab9, 758ebe5242,
699a1d1708, d9f0890a84, 69728cb4ef, 7cffe848d6, 749ff8c93f, b7bcef6864, 1d5839fb92, 9cc8bc882b,
e9f060cce4, 5f8f5b5eee, ed5fa69b41, 04f300e187, efb51b058c, 59542d5cd3, ed49d51ee1, 91df0a8aa9,
0f459c0455, 7d13bc6861, eeed1f252a, bd0dfbcd7a, 50090ed03a, 225f9b3801, f03a3321fc, ff75fdadf3,
c0fa45a1b9, a088d1ab7d, 672522492e, f48a67acc9, bce8b7a04c, 12a706452c, d89615ff9e
@@ -118,13 +118,11 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

- checkout: openvino_contrib
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino_contrib
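The checkout hunks here and in the pipeline hunks below all reshape the same Azure Pipelines pattern. A minimal sketch of the assembled steps, with indentation reconstructed; the `openvino_contrib` repository alias and its `resources.repositories` declaration are assumptions, since the hunks only show the steps themselves:

```yaml
resources:
  repositories:
    - repository: openvino_contrib              # assumed alias, declared elsewhere in the pipeline
      type: github
      name: openvinotoolkit/openvino_contrib    # assumed repository name

steps:
  - checkout: self              # the openvino repository being built
    clean: 'true'               # wipe the working directory before fetching
    fetchDepth: '1'             # shallow clone, only the tip commit
    submodules: 'true'          # also fetch registered git submodules
    path: openvino              # checkout location under $(Pipeline.Workspace)

  - checkout: openvino_contrib
    clean: 'true'
    fetchDepth: '1'
    submodules: 'true'
    path: openvino_contrib
```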
@@ -160,6 +158,8 @@ jobs:
-DENABLE_TESTS=ON
-DBUILD_java_api=ON
-DBUILD_nvidia_plugin=OFF
-DBUILD_custom_operations=OFF
-DENABLE_INTEL_GPU=ON
-DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache

@@ -1 +1 @@
rel-1.8.1
rel-1.14.0

@@ -71,7 +71,7 @@ jobs:
  maxParallel: '2'

# About 150% of total time
timeoutInMinutes: '120'
timeoutInMinutes: '180'

pool:
  name: LIN_VMSS_VENV_F16S_U20_WU2

@@ -151,13 +151,11 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

- checkout: openvino_contrib
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino_contrib

@@ -165,7 +163,7 @@ jobs:
    set -e
    sudo -E $(REPO_DIR)/install_build_dependencies.sh
    # Move jdk into contrib
    # 'clang' compiler is to check that samples can be built using it
    # 'clang' compiler is used as a default compiler
    sudo apt --assume-yes install openjdk-11-jdk libbz2-dev clang
    # For Python API
    python3 -m pip install --upgrade pip

@@ -219,7 +217,6 @@ jobs:
# Should be after 'Install dependencies' because Git lfs is not installed
- checkout: testdata
  clean: 'true'
  fetchDepth: '1'
  lfs: 'true'
  path: testdata

@@ -239,10 +236,14 @@ jobs:
-DENABLE_FASTER_BUILD=ON
-DENABLE_STRICT_DEPENDENCIES=OFF
-DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
-DCUSTOM_OPERATIONS="calculate_grid;complex_mul;fft;grid_sample;sparse_conv;sparse_conv_transpose"
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_CXX_LINKER_LAUNCHER=ccache
-DCMAKE_C_LINKER_LAUNCHER=ccache
-DCMAKE_CXX_COMPILER=clang++
-DCMAKE_C_COMPILER=clang
-DENABLE_SYSTEM_SNAPPY=ON
-DCPACK_GENERATOR=$(CMAKE_CPACK_GENERATOR)
-DBUILD_nvidia_plugin=OFF
-S $(REPO_DIR)

@@ -303,7 +304,7 @@ jobs:

# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
    export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.00.00.1910/linux/x64:$(LD_LIBRARY_PATH)
    export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.1906/linux/x64:$(LD_LIBRARY_PATH)
    python3 -m pytest -s $(INSTALL_TEST_DIR)/pyngraph $(PYTHON_STATIC_ARGS) \
        --junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
        --ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_zoo_models.py \

@@ -313,7 +314,7 @@ jobs:

# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
    # For python imports to import pybind_mock_frontend
    export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.00.00.1910/linux/x64:$(LD_LIBRARY_PATH)
    export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.1906/linux/x64:$(LD_LIBRARY_PATH)
    export PYTHONPATH=$(INSTALL_TEST_DIR):$(INSTALL_DIR)/python/python3.8:$PYTHONPATH
    python3 -m pytest -sv $(INSTALL_TEST_DIR)/pyopenvino $(PYTHON_STATIC_ARGS) \
        --junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \

@@ -323,7 +324,7 @@ jobs:
  displayName: 'Python API 2.0 Tests'

- script: |
    export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.00.00.1910/linux/x64:$(LD_LIBRARY_PATH)
    export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.1906/linux/x64:$(LD_LIBRARY_PATH)
    python3 -m pytest -s $(INSTALL_TEST_DIR)/mo/unit_tests --junitxml=$(INSTALL_TEST_DIR)/TEST-ModelOptimizer.xml
  displayName: 'Model Optimizer UT'
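Several steps above change only because the bundled GNA runtime on `LD_LIBRARY_PATH` moved from 03.00.00.1910 to 03.05.00.1906. A minimal sketch of one such step after the bump, assuming the archive is unpacked under `$(REPO_DIR)/temp` as these pipelines do:

```yaml
- script: |
    # prepend the bundled GNA runtime (new version) to the loader path
    export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.1906/linux/x64:$(LD_LIBRARY_PATH)
    python3 -m pytest -s $(INSTALL_TEST_DIR)/mo/unit_tests \
        --junitxml=$(INSTALL_TEST_DIR)/TEST-ModelOptimizer.xml
  displayName: 'Model Optimizer UT'
```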
@@ -361,7 +362,7 @@ jobs:
  displayName: 'List install files'

- script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -i $(INSTALL_DIR) -b $(BUILD_DIR)/cpp_samples
  displayName: 'Build cpp samples'
  displayName: 'Build cpp samples - gcc'

- script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -b $(BUILD_DIR)/cpp_samples_clang
  env:

@@ -389,17 +390,16 @@ jobs:
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_conditional_compilation_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ConditionalCompilation.xml
  displayName: 'Conditional Compilation Tests'

- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-PaddleTests.xml
  displayName: 'Paddle Tests'

- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_ir_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-IRFrontend.xml
  displayName: 'IR Frontend Tests'

- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ONNXFrontend.xml
  displayName: 'ONNX Frontend Tests'

# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-Paddle.xml
  displayName: 'Paddle Frontend UT'
  enabled: 'false'

- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_tensorflow_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-Tensorflow.xml
  displayName: 'TensorFlow Frontend Unit Tests'

@@ -430,10 +430,14 @@ jobs:

- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_gna_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_gna_unit_tests.xml
  displayName: 'GNA UT'
  enabled: 'false' # TODO: fix

- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ieMultiPluginUnitTests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ieMultiPluginUnitTests.xml
  displayName: 'MULTI UT'

- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_auto_batch_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_auto_batch_unit_tests.xml
  displayName: 'AutoBatch UT'

- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_template_func_tests --gtest_filter=*smoke* --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-templateFuncTests.xml
  displayName: 'TEMPLATE FuncTests'

@@ -443,16 +447,10 @@ jobs:

- script: |
    $(RUN_PREFIX) $(INSTALL_TEST_DIR)/InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceEngineCAPITests.xml
  env:
    DATA_PATH: $(MODELS_PATH)
    MODELS_PATH: $(MODELS_PATH)
  displayName: 'IE CAPITests'

- script: |
    $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_capi_test.xml
  env:
    DATA_PATH: $(MODELS_PATH)
    MODELS_PATH: $(MODELS_PATH)
  displayName: 'OV CAPITests'

- task: CMake@1

@@ -527,22 +525,9 @@ jobs:
    python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
    export PYTHONPATH=$(LAYER_TESTS_DIR):$PYTHONPATH
    export TEST_DEVICE=CPU
    $(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/mo_python_api_tests/test_mo_convert_complex_params.py --ir_version=11 --junitxml=./TEST-test_mo_convert_complex_params.xmlTEST
  displayName: 'MO Python API Tests - Complex Python params'
    $(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/mo_python_api_tests/ --junitxml=./TEST-test_mo_convert.xmlTEST
  displayName: 'MO Python API Tests'

- script: |
    python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
    export PYTHONPATH=$(LAYER_TESTS_DIR):$PYTHONPATH
    export TEST_DEVICE=CPU
    $(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/mo_python_api_tests/test_mo_convert_tf.py --ir_version=11 --junitxml=./TEST-test_mo_convert_tf.xmlTEST
  displayName: 'MO Python API Tests - Import TF model from memory'

- script: |
    python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
    export PYTHONPATH=$(LAYER_TESTS_DIR):$PYTHONPATH
    export TEST_DEVICE=CPU
    $(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/mo_python_api_tests/test_mo_convert_pytorch.py --ir_version=11 --junitxml=./TEST-test_mo_convert_pytorch.xmlTEST
  displayName: 'MO Python API Tests - Import PyTorch model from memory'

- script: |
    python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt

@@ -134,13 +134,11 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

- checkout: openvino_contrib
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino_contrib

@@ -102,7 +102,6 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

@@ -118,7 +117,6 @@ jobs:

- checkout: testdata
  clean: 'true'
  fetchDepth: '1'
  lfs: 'true'
  path: testdata

@@ -82,13 +82,11 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

- checkout: openvino_contrib
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino_contrib

@@ -100,13 +100,11 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

- checkout: openvino_contrib
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino_contrib

@@ -102,14 +102,13 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

- script: |
    set -e
    sudo -E $(REPO_DIR)/install_build_dependencies.sh
    # 'clang' compiler is to check that samples can be built using it
    # 'clang' is used as a default compiler
    sudo apt --assume-yes install clang
    sudo apt --assume-yes install --no-install-recommends libopencv-imgproc-dev libopencv-imgcodecs-dev
    # For opencv-python: python3-setuptools and pip upgrade

@@ -143,7 +142,6 @@ jobs:
# Should be after 'Install dependencies' because Git lfs is not installed
- checkout: testdata
  clean: 'true'
  fetchDepth: '1'
  lfs: 'true'
  path: testdata

@@ -161,6 +159,7 @@ jobs:
-DENABLE_TESTS=ON
-DENABLE_FASTER_BUILD=ON
-DENABLE_STRICT_DEPENDENCIES=OFF
-DENABLE_SYSTEM_SNAPPY=ON
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-DCMAKE_C_COMPILER_LAUNCHER=ccache
-DCMAKE_CXX_LINKER_LAUNCHER=ccache

@@ -283,13 +282,13 @@ jobs:
  displayName: 'Clean build dir'

- script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -i $(INSTALL_DIR)
  displayName: 'Build cpp samples'
  displayName: 'Build cpp samples - gcc'

- script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -i $(INSTALL_DIR)
  displayName: 'Build cpp samples - clang'
  env:
    CC: clang
    CXX: clang++
  displayName: 'Build cpp samples - clang'

- script: $(SAMPLES_INSTALL_DIR)/c/build_samples.sh -i $(INSTALL_DIR)
  displayName: 'Build c samples'
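The C++ samples are now built twice: once with the default gcc toolchain and once with clang selected purely through the environment, since `build_samples.sh` drives CMake, which honors `CC`/`CXX` at first configure. A sketch of the clang variant, combining the flags seen in the two hunks above (using a separate build directory where a hunk does not show one is an assumption):

```yaml
- script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -i $(INSTALL_DIR) -b $(BUILD_DIR)/cpp_samples_clang
  env:
    CC: clang       # picked up by CMake when build_samples.sh configures the samples
    CXX: clang++
  displayName: 'Build cpp samples - clang'
```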
@@ -306,11 +305,12 @@ jobs:
    LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
  displayName: 'ONNX Frontend Tests'

- script: |
    $(INSTALL_TEST_DIR)/paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-Paddle.xml
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
- script: $(INSTALL_TEST_DIR)/paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-Paddle.xml
  env:
    LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
  displayName: 'Paddle Frontend UT'
  enabled: 'false'

- script: $(INSTALL_TEST_DIR)/ov_tensorflow_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-Tensorflow.xml
  env:

@@ -30,7 +30,6 @@ jobs:

# - checkout: self
# clean: 'true'
# fetchDepth: '1'
# submodules: 'true'
# path: openvino

@@ -42,7 +41,6 @@ jobs:
# Should be after 'Install dependencies' because Git lfs is not installed
# - checkout: testdata
# clean: 'true'
# fetchDepth: '1'
# submodules: 'true'
# lfs: 'true'
# path: testdata

@@ -91,7 +91,6 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

@@ -101,7 +101,6 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

@@ -171,7 +170,7 @@ jobs:

- script: |
    source $(INSTALL_DIR)/setupvars.sh
    ./onnxruntime_shared_lib_test
    ./onnxruntime_shared_lib_test --gtest_filter=-CApiTest.test_custom_op_openvino_wrapper_library
  workingDirectory: $(ONNXRUNTIME_BUILD_DIR)/RelWithDebInfo
  displayName: 'Run onnxruntime_shared_lib_test'

@@ -100,19 +100,16 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

- checkout: openvino_contrib
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino_contrib

- checkout: testdata
  clean: 'true'
  fetchDepth: '1'
  lfs: 'true'
  path: testdata

@@ -143,9 +140,6 @@ jobs:
    -DBUILD_nvidia_plugin=OFF \
    -S $(REPO_DIR) \
    -B $(BUILD_DIR)
  env:
    CC: gcc
    CXX: g++
  displayName: 'CMake OpenVINO'

- script: ls -alR $(REPO_DIR)/temp/

@@ -122,19 +122,16 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

- checkout: openvino_contrib
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino_contrib

- checkout: testdata
  clean: 'true'
  fetchDepth: '1'
  lfs: 'true'
  path: testdata

@@ -179,6 +176,7 @@ jobs:
-DENABLE_STRICT_DEPENDENCIES=OFF ^
-DENABLE_PYTHON=ON ^
-DBUILD_nvidia_plugin=OFF ^
-DCUSTOM_OPERATIONS="calculate_grid;complex_mul;fft;grid_sample;sparse_conv;sparse_conv_transpose" ^
-DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.10.7\x64\python.exe" ^
-DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.10.7\x64\include" ^
-DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.10.7\x64\libs\python310.lib" ^

@@ -269,8 +267,9 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ONNXFrontend.xml
  displayName: 'ONNX Frontend Tests'

- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-Paddle.xml
  displayName: 'Paddle Frontend UT'
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
#- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\paddle_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-Paddle.xml
#  displayName: 'Paddle Frontend UT'

- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_tensorflow_frontend_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-Tensorflow.xml
  displayName: 'TensorFlow Frontend Unit Tests'

@@ -305,6 +304,9 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ieMultiPluginUnitTests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ieMultiPluginUnitTests.xml
  displayName: 'MULTI UT'

- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_auto_batch_unit_tests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_auto_batch_unit_tests.xml
  displayName: 'AutoBatch UT'

- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_template_func_tests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-templateFuncTests.xml
  displayName: 'TEMPLATE FuncTests'

@@ -314,16 +316,10 @@ jobs:

- script: |
    call $(SETUPVARS) && $(INSTALL_TEST_DIR)\InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-InferenceEngineCAPITests.xml
  env:
    DATA_PATH: $(MODELS_PATH)
    MODELS_PATH: $(MODELS_PATH)
  displayName: 'IE CAPITests'

- script: |
    call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-ov_capi_test.xml
  env:
    DATA_PATH: $(MODELS_PATH)
    MODELS_PATH: $(MODELS_PATH)
  displayName: 'OV CAPITests'

- task: PublishTestResults@2

@@ -93,7 +93,6 @@ jobs:

- checkout: self
  clean: 'true'
  fetchDepth: '1'
  submodules: 'true'
  path: openvino

@@ -107,7 +106,6 @@ jobs:
- checkout: testdata
  clean: 'true'
  lfs: 'true'
  fetchDepth: '1'
  path: testdata

- script: |
.github/dependabot.yml (114 changes)

@@ -6,7 +6,7 @@ updates:
# Python product dependencies
#

# Python API requirements
# Python API, Frontends
- package-ecosystem: pip
  directory: "/src/bindings/python/"
  schedule:

@@ -17,12 +17,33 @@ updates:
  assignees:
    - "jiwaszki"
    - "p-wysocki"
    - "akuporos"
    - "rkazants"
    - "ceciliapeng2011"
    - "meiyang-intel"
    - "mbencer"
    - "tomdol"
    - "jane-intel"
  versioning-strategy: increase-if-necessary

# Tests
- package-ecosystem: pip
  directory: "/tests"
  schedule:
    interval: "daily"
    time: "09:00"
    timezone: "Poland"
  open-pull-requests-limit: 3
  assignees:
    - "jiwaszki"
    - "p-wysocki"
    - "akuporos"
    - "rkazants"
  versioning-strategy: increase-if-necessary

# Model Optimizer requirements
# Model Optimizer, openvino_dev and Benchmark tool
- package-ecosystem: pip
  directory: "/tools/mo"
  directory: "/tools"
  schedule:
    interval: "daily"
    time: "09:00"

@@ -33,6 +54,8 @@ updates:
    - "andrei-kochin"
    - "jiwaszki"
    - "p-wysocki"
    - "akuporos"
    - "Wovchena"
  allow:
    - dependency-name: "*"
      dependency-type: "production"

@@ -51,89 +74,10 @@ updates:
    - "KodiaqQ"
    - "jiwaszki"
    - "p-wysocki"
    - "akuporos"
    - "rkazants"
  versioning-strategy: increase-if-necessary

# benchmark_tool requirements
- package-ecosystem: pip
  directory: "/tools/benchmark_tool"
  schedule:
    interval: "daily"
    time: "09:00"
    timezone: "Asia/Dubai"
  open-pull-requests-limit: 3
  assignees:
    - "Wovchena"
    - "jiwaszki"
    - "p-wysocki"
    - "rkazants"
  versioning-strategy: increase-if-necessary

#
# Tests requirements for frontends
#

# PaddlePaddle FE tests requirements
- package-ecosystem: pip
  directory: "/src/frontends/paddle/tests/"
  schedule:
    interval: "daily"
    time: "09:00"
    timezone: "Asia/Shanghai"
  open-pull-requests-limit: 3
  assignees:
    - "ceciliapeng2011"
    - "meiyang-intel"
    - "jiwaszki"
    - "p-wysocki"
    - "rkazants"
  versioning-strategy: increase-if-necessary

# ONNX FE tests requirements
- package-ecosystem: pip
  directory: "/src/frontends/onnx/tests/"
  schedule:
    interval: "daily"
    time: "09:00"
    timezone: "Poland"
  open-pull-requests-limit: 3
  assignees:
    - "mbencer"
    - "tomdol"
    - "jiwaszki"
    - "p-wysocki"
    - "rkazants"
  versioning-strategy: increase-if-necessary

# TensorFlow FE tests requirements
- package-ecosystem: pip
  directory: "/src/frontends/tensorflow/tests/"
  schedule:
    interval: "daily"
    time: "09:00"
    timezone: "Asia/Dubai"
  open-pull-requests-limit: 3
  assignees:
    - "rkazants"
    - "jiwaszki"
    - "p-wysocki"
  versioning-strategy: increase-if-necessary

# TensorFlow Lite FE tests requirements
- package-ecosystem: pip
  directory: "/src/frontends/tensorflow_lite/tests/"
  schedule:
    interval: "daily"
    time: "09:00"
    timezone: "Asia/Dubai"
  open-pull-requests-limit: 3
  assignees:
    - "jane-intel"
    - "rkazants"
    - "jiwaszki"
    - "p-wysocki"
  versioning-strategy: increase-if-necessary

#
# Python Samples
#

@@ -149,6 +93,7 @@ updates:
    - "Wovchena"
    - "jiwaszki"
    - "p-wysocki"
    - "akuporos"
    - "rkazants"
  versioning-strategy: increase-if-necessary

@@ -163,6 +108,7 @@ updates:
    - "Wovchena"
    - "jiwaszki"
    - "p-wysocki"
    - "akuporos"
    - "rkazants"
  versioning-strategy: increase-if-necessary

@@ -177,6 +123,7 @@ updates:
    - "Wovchena"
    - "jiwaszki"
    - "p-wysocki"
    - "akuporos"
    - "rkazants"
  versioning-strategy: increase-if-necessary

@@ -191,6 +138,7 @@ updates:
    - "Wovchena"
    - "jiwaszki"
    - "p-wysocki"
    - "akuporos"
    - "rkazants"
  versioning-strategy: increase-if-necessary
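Most of the dependabot hunks above instantiate one template per requirements file. A minimal sketch of a single assembled `updates` entry, using values taken from the hunks; the `version: 2` top-level key is an assumption (standard for dependabot.yml) and does not appear in these hunks:

```yaml
version: 2                                       # assumed; standard dependabot.yml preamble
updates:
  # Tests
  - package-ecosystem: pip
    directory: "/tests"
    schedule:
      interval: "daily"
      time: "09:00"
      timezone: "Poland"
    open-pull-requests-limit: 3                  # cap on concurrent dependabot PRs
    assignees:
      - "jiwaszki"
      - "p-wysocki"
      - "akuporos"
      - "rkazants"
    versioning-strategy: increase-if-necessary   # bump requirements only when needed
```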
.github/workflows/build_doc.yml (4 changes)

@@ -11,7 +11,7 @@ env:
  DOXYREST_VER: '2.1.3'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  group: ${{ github.workflow }}-${{ github.head_ref && github.ref || github.run_id }}
  cancel-in-progress: true

jobs:

@@ -25,7 +25,7 @@ jobs:
      lfs: true

  - name: Install apt-get dependencies
    uses: awalsh128/cache-apt-pkgs-action@v1.1.3
    uses: awalsh128/cache-apt-pkgs-action@v1.3.0
    with:
      packages: graphviz texlive liblua5.2-0 libclang1-9 libclang-cpp9
      version: 3.0
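The new `group` expression deduplicates documentation builds per pull request while keeping every other run unique: `github.head_ref` is non-empty only for pull_request events, so the `&&`/`||` chain yields `github.ref` for PRs and falls back to the always-unique `github.run_id` otherwise. The same pattern in isolation:

```yaml
concurrency:
  # PRs: head_ref is set, so the group is the PR ref and a newer push
  # cancels the older in-flight run. Pushes/schedules: head_ref is empty,
  # the group becomes the unique run_id, and nothing unrelated is cancelled.
  group: ${{ github.workflow }}-${{ github.head_ref && github.ref || github.run_id }}
  cancel-in-progress: true
```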
.github/workflows/code_snippets.yml (7 changes)

@@ -30,6 +30,13 @@ jobs:
        submodules: recursive
        lfs: true

    - name: Install OpenCL
      uses: awalsh128/cache-apt-pkgs-action@v1.3.0
      if: runner.os == 'Linux'
      with:
        packages: ocl-icd-opencl-dev opencl-headers
        version: 3.0

    - name: CMake configure
      run: cmake -DCMAKE_BUILD_TYPE=Release -B build
.github/workflows/mo.yml (2 changes)

@@ -30,7 +30,7 @@ jobs:
      python-version: '3.10'

  - name: Cache pip
    uses: actions/cache@v1
    uses: actions/cache@v3
    with:
      path: ~/.cache/pip
      key: ${{ runner.os }}-pip-${{ hashFiles('tools/mo/requirements*.txt') }}
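The cache step above keys the pip cache on a hash of the Model Optimizer requirement files, so any change to them produces a fresh cache. A sketch of the same step with an optional `restore-keys` fallback; `restore-keys` is added for illustration only and does not appear in the hunk:

```yaml
- name: Cache pip
  uses: actions/cache@v3
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('tools/mo/requirements*.txt') }}
    restore-keys: |                 # assumption: partial-match fallback to an older cache
      ${{ runner.os }}-pip-
```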
.gitignore (2 changes)

@@ -57,3 +57,5 @@ __pycache__
/tools/mo/*.mapping
/tools/mo/*.dat
/tools/mo/*.svg
/src/plugins/intel_cpu/tools/commit_slider/*.json
/src/plugins/intel_cpu/tools/commit_slider/slider_cache/*
.gitmodules (3 changes)

@@ -66,3 +66,6 @@
[submodule "thirdparty/flatbuffers/flatbuffers"]
    path = thirdparty/flatbuffers/flatbuffers
    url = https://github.com/google/flatbuffers.git
[submodule "thirdparty/snappy"]
    path = thirdparty/snappy
    url = https://github.com/google/snappy.git
@@ -17,12 +17,12 @@ else()
  endif()
endif()

project(OpenVINO DESCRIPTION "OpenVINO toolkit")

if(NOT CMAKE_BUILD_TYPE)
  set(CMAKE_BUILD_TYPE "Release" CACHE STRING "CMake build type" FORCE)
if(POLICY CMP0091)
  cmake_policy(SET CMP0091 NEW) # Enables use of MSVC_RUNTIME_LIBRARY
endif()

project(OpenVINO DESCRIPTION "OpenVINO toolkit")

find_package(IEDevScripts REQUIRED
             PATHS "${OpenVINO_SOURCE_DIR}/cmake/developer_package"
             NO_CMAKE_FIND_ROOT_PATH

@@ -39,7 +39,6 @@ if(ENABLE_COVERAGE)
endif()

# resolving dependencies for the project
message (STATUS "PROJECT ............................... " ${PROJECT_NAME})
message (STATUS "CMAKE_VERSION ......................... " ${CMAKE_VERSION})
message (STATUS "CMAKE_BINARY_DIR ...................... " ${CMAKE_BINARY_DIR})
message (STATUS "CMAKE_SOURCE_DIR ...................... " ${CMAKE_SOURCE_DIR})

@@ -48,10 +47,28 @@ message (STATUS "OpenVINO_BINARY_DIR ................... " ${OpenVINO_BINARY_DIR
message (STATUS "CMAKE_GENERATOR ....................... " ${CMAKE_GENERATOR})
message (STATUS "CMAKE_C_COMPILER_ID ................... " ${CMAKE_C_COMPILER_ID})
message (STATUS "CMAKE_CXX_COMPILER_ID ................. " ${CMAKE_CXX_COMPILER_ID})
message (STATUS "CMAKE_BUILD_TYPE ...................... " ${CMAKE_BUILD_TYPE})
message (STATUS "CMAKE_TOOLCHAIN_FILE .................. " ${CMAKE_TOOLCHAIN_FILE})
message (STATUS "GLIBC_VERSION.......................... " ${OV_GLIBC_VERSION})

if(OV_GENERATOR_MULTI_CONFIG)
  string(REPLACE ";" " " config_types "${CMAKE_CONFIGURATION_TYPES}")
  message (STATUS "CMAKE_CONFIGURATION_TYPES ............. " ${config_types})
  unset(config_types)
  if(CMAKE_GENERATOR MATCHES "^Ninja Multi-Config$")
    message (STATUS "CMAKE_DEFAULT_BUILD_TYPE .............. " ${CMAKE_DEFAULT_BUILD_TYPE})
  endif()
else()
  message (STATUS "CMAKE_BUILD_TYPE ...................... " ${CMAKE_BUILD_TYPE})
endif()
if(CMAKE_GENERATOR_PLATFORM)
  message (STATUS "CMAKE_GENERATOR_PLATFORM .............. " ${CMAKE_GENERATOR_PLATFORM})
endif()
if(CMAKE_GENERATOR_TOOLSET)
  message (STATUS "CMAKE_GENERATOR_TOOLSET ............... " ${CMAKE_GENERATOR_TOOLSET})
endif()
if(CMAKE_TOOLCHAIN_FILE)
  message (STATUS "CMAKE_TOOLCHAIN_FILE .................. " ${CMAKE_TOOLCHAIN_FILE})
endif()
if(OV_GLIBC_VERSION)
  message (STATUS "GLIBC_VERSION ......................... " ${OV_GLIBC_VERSION})
endif()

# remove file with exported developer targets to force its regeneration
file(REMOVE "${CMAKE_BINARY_DIR}/ngraphTargets.cmake")
@@ -163,7 +163,7 @@ The system requirements vary depending on platform and are available on dedicate

## How to build

See the [OpenVINO Wiki](https://github.com/openvinotoolkit/openvino/wiki#how-to-build) to get more information about the OpenVINO build process.
See [How to build OpenVINO](./docs/dev/build.md) to get more information about the OpenVINO build process.

## How to contribute
@@ -97,10 +97,10 @@ function(ov_download_tbb)
  if(WIN32 AND X86_64)
    # TODO: add target_path to be platform specific as well, to avoid following if
    RESOLVE_DEPENDENCY(TBB
            ARCHIVE_WIN "tbb2020_617e9a71_win.zip"
            ARCHIVE_WIN "oneapi-tbb-2021.2.1-win.zip"
            TARGET_PATH "${TEMP}/tbb"
            ENVIRONMENT "TBBROOT"
            SHA256 "01cac3cc48705bd52b83a6e1fa1ed95c708928be76160f5b9c5c37f954d56df4"
            SHA256 "d81591673bd7d3d9454054642f8ef799e1fdddc7b4cee810a95e6130eb7323d4"
            USE_NEW_LOCATION TRUE)
  elseif(ANDROID AND X86_64)
    RESOLVE_DEPENDENCY(TBB

@@ -110,10 +110,10 @@ function(ov_download_tbb)
            SHA256 "f42d084224cc2d643314bd483ad180b081774608844000f132859fca3e9bf0ce")
  elseif(LINUX AND X86_64)
    RESOLVE_DEPENDENCY(TBB
            ARCHIVE_LIN "tbb2020_617e9a71_lin_strip.tgz"
            ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin.tgz"
            TARGET_PATH "${TEMP}/tbb"
            ENVIRONMENT "TBBROOT"
            SHA256 "e7a38f68059fb36de8b59d40b283a849f26275e34a58d2acadfdb84d49e31b9b"
            SHA256 "0a56f73baaa40d72e06949ea6d593ae63a19f7580ce71c08287c1f59d2e5b988"
            USE_NEW_LOCATION TRUE)
  elseif(YOCTO_AARCH64)
    RESOLVE_DEPENDENCY(TBB

@@ -123,10 +123,10 @@ function(ov_download_tbb)
            SHA256 "321261ff2eda6d4568a473cb883262bce77a93dac599f7bd65d2918bdee4d75b")
  elseif(APPLE AND X86_64)
    RESOLVE_DEPENDENCY(TBB
            ARCHIVE_MAC "tbb2020_617e9a71_mac.tgz"
            ARCHIVE_MAC "oneapi-tbb-2021.2.1-mac.tgz"
            TARGET_PATH "${TEMP}/tbb"
            ENVIRONMENT "TBBROOT"
            SHA256 "67a44b695bef3348416eaf5bf2baca2b1401576c0e09c394304eba1e0eee96cd"
            SHA256 "c57ce4b97116cd3093c33e6dcc147fb1bbb9678d0ee6c61a506b2bfe773232cb"
            USE_NEW_LOCATION TRUE)
  else()
    message(WARNING "Prebuilt TBB is not available on current platform")

@@ -177,16 +177,18 @@ function(ov_download_tbbbind_2_5)

  if(WIN32 AND X86_64)
    RESOLVE_DEPENDENCY(TBBBIND_2_5
            ARCHIVE_WIN "tbbbind_2_5_static_win_v1.zip"
            ARCHIVE_WIN "tbbbind_2_5_static_win_v2.zip"
            TARGET_PATH "${TEMP}/tbbbind_2_5"
            ENVIRONMENT "TBBBIND_2_5_ROOT"
            SHA256 "a67afeea8cf194f97968c800dab5b5459972908295242e282045d6b8953573c1")
            SHA256 "49ae93b13a13953842ff9ae8d01681b269b5b0bc205daf18619ea9a828c44bee"
            USE_NEW_LOCATION TRUE)
  elseif(LINUX AND X86_64)
    RESOLVE_DEPENDENCY(TBBBIND_2_5
            ARCHIVE_LIN "tbbbind_2_5_static_lin_v2.tgz"
            ARCHIVE_LIN "tbbbind_2_5_static_lin_v3.tgz"
            TARGET_PATH "${TEMP}/tbbbind_2_5"
            ENVIRONMENT "TBBBIND_2_5_ROOT"
            SHA256 "865e7894c58402233caf0d1b288056e0e6ab2bf7c9d00c9dc60561c484bc90f4")
            SHA256 "d39deb262c06981b5e2d2e3c593e9fc9be62ce4feb91dd4e648e92753659a6b3"
            USE_NEW_LOCATION TRUE)
  else()
    # TMP: for Apple Silicon TBB does not provide TBBBind
    if(NOT (APPLE AND AARCH64))

@@ -298,8 +300,8 @@ if(ENABLE_INTEL_GNA)
        GNA_LIB_DIR
        libGNA_INCLUDE_DIRS
        libGNA_LIBRARIES_BASE_PATH)
  set(GNA_VERSION "03.00.00.1910")
  set(GNA_HASH "894ddbc0ae3459f04513b853b0cabc32890dd4ea37228a022b6a32101bdbb7f8")
  set(GNA_VERSION "03.05.00.1906")
  set(GNA_HASH "4a5be86d9c026b0e10afac2a57fc7c99d762b30e3d506abb3a3380fbcfe2726e")

  set(FILES_TO_EXTRACT_LIST gna_${GNA_VERSION}/include)
  if(WIN32)
@@ -24,7 +24,6 @@ function(set_ci_build_number)
endfunction()

include(features)
include(message)

set_ci_build_number()

@@ -112,10 +111,13 @@ else()
  set(BIN_FOLDER "bin/${ARCH_FOLDER}")
endif()

set(CMAKE_BUILD_TYPE "Release" CACHE STRING "CMake build type")
set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Release;Debug;RelWithDebInfo;MinSizeRel")
if(CMAKE_GENERATOR MATCHES "^Ninja Multi-Config$")
  # Ninja-Multi specific, see:
  # https://cmake.org/cmake/help/latest/variable/CMAKE_DEFAULT_BUILD_TYPE.html
  set(CMAKE_DEFAULT_BUILD_TYPE "Release" CACHE STRING "CMake default build type")
elseif(NOT OV_GENERATOR_MULTI_CONFIG)
  set(CMAKE_BUILD_TYPE "Release" CACHE STRING "CMake build type")
  set_property(CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS "Release;Debug;RelWithDebInfo;MinSizeRel")
endif()

if(USE_BUILD_TYPE_SUBFOLDER)

@@ -153,10 +155,10 @@ set(CMAKE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX})
set(CMAKE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX})

# Support CMake multi-configuration for Visual Studio / Ninja or Xcode build
if (OV_GENERATOR_MULTI_CONFIG)
if(OV_GENERATOR_MULTI_CONFIG)
  set(IE_BUILD_POSTFIX $<$<CONFIG:Debug>:${IE_DEBUG_POSTFIX}>$<$<CONFIG:Release>:${IE_RELEASE_POSTFIX}>)
else ()
  if (CMAKE_BUILD_TYPE STREQUAL "Debug")
else()
  if(CMAKE_BUILD_TYPE STREQUAL "Debug")
    set(IE_BUILD_POSTFIX ${IE_DEBUG_POSTFIX})
  else()
    set(IE_BUILD_POSTFIX ${IE_RELEASE_POSTFIX})
@@ -5,60 +5,99 @@
if(WIN32)
    set(PROGRAMFILES_ENV "ProgramFiles(X86)")
    file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)
    set(UWP_SDK_PATH "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64")

    message(STATUS "Trying to find apivalidator in: ${UWP_SDK_PATH}")
    find_host_program(UWP_API_VALIDATOR
    set(WDK_PATHS "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64"
                  "${PROGRAMFILES}/Windows Kits/10/bin/x64")

    message(STATUS "Trying to find apivalidator in: ")
    foreach(wdk_path IN LISTS WDK_PATHS)
        message("    * ${wdk_path}")
    endforeach()

    find_host_program(ONECORE_API_VALIDATOR
                      NAMES apivalidator
                      PATHS "${UWP_SDK_PATH}"
                      DOC "ApiValidator for UWP compliance")
                      PATHS ${WDK_PATHS}
                      DOC "ApiValidator for OneCore compliance")

    if(UWP_API_VALIDATOR)
        message(STATUS "Found apivalidator: ${UWP_API_VALIDATOR}")
    if(ONECORE_API_VALIDATOR)
        message(STATUS "Found apivalidator: ${ONECORE_API_VALIDATOR}")
    endif()
endif()

function(_ie_add_api_validator_post_build_step_recursive)
    cmake_parse_arguments(API_VALIDATOR "" "TARGET" "" ${ARGN})

    list(APPEND API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGET})
    set(API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGETS} PARENT_SCOPE)

    get_target_property(IS_IMPORTED ${API_VALIDATOR_TARGET} IMPORTED)
    if(IS_IMPORTED)
        return()
    endif()

    get_target_property(LIBRARY_TYPE ${API_VALIDATOR_TARGET} TYPE)
    if(LIBRARY_TYPE STREQUAL "EXECUTABLE" OR LIBRARY_TYPE STREQUAL "SHARED_LIBRARY")
        get_target_property(LINKED_LIBRARIES ${API_VALIDATOR_TARGET} LINK_LIBRARIES)
        if(LINKED_LIBRARIES)
            foreach(ITEM IN LISTS LINKED_LIBRARIES)
                if(NOT TARGET ${ITEM})
                    continue()
                endif()
                get_target_property(LIBRARY_TYPE_DEPENDENCY ${ITEM} TYPE)
                if(LIBRARY_TYPE_DEPENDENCY STREQUAL "SHARED_LIBRARY")
                    _ie_add_api_validator_post_build_step_recursive(TARGET ${ITEM})
                endif()
            endforeach()
        endif()
    if(LIBRARY_TYPE MATCHES "^(SHARED_LIBRARY|MODULE_LIBRARY|EXECUTABLE)$" AND
       NOT ${API_VALIDATOR_TARGET} IN_LIST API_VALIDATOR_TARGETS)
        list(APPEND API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGET})
    endif()
    # keep checks target list to track cyclic dependencies, leading to infinite recursion
    list(APPEND checked_targets ${API_VALIDATOR_TARGET})

    if(NOT LIBRARY_TYPE STREQUAL "INTERFACE_LIBRARY")
        get_target_property(LINKED_LIBRARIES ${API_VALIDATOR_TARGET} LINK_LIBRARIES)
    else()
        set(LINKED_LIBRARIES)
    endif()
    get_target_property(INTERFACE_LINKED_LIBRARIES ${API_VALIDATOR_TARGET} INTERFACE_LINK_LIBRARIES)

    foreach(library IN LISTS LINKED_LIBRARIES INTERFACE_LINKED_LIBRARIES)
        if(TARGET "${library}")
            get_target_property(orig_library ${library} ALIASED_TARGET)
            if(orig_library IN_LIST checked_targets OR library IN_LIST checked_targets)
                # in case of cyclic dependencies, we need to skip current target
                continue()
            endif()
            if(TARGET "${orig_library}")
                _ie_add_api_validator_post_build_step_recursive(TARGET ${orig_library})
            else()
                _ie_add_api_validator_post_build_step_recursive(TARGET ${library})
            endif()
        endif()
    endforeach()

    set(API_VALIDATOR_TARGETS ${API_VALIDATOR_TARGETS} PARENT_SCOPE)
endfunction()

set(VALIDATED_LIBRARIES "" CACHE INTERNAL "")
set(VALIDATED_TARGETS "" CACHE INTERNAL "")

function(_ov_add_api_validator_post_build_step)
    set(UWP_API_VALIDATOR_APIS "${PROGRAMFILES}/Windows Kits/10/build/universalDDIs/x64/UniversalDDIs.xml")
    set(UWP_API_VALIDATOR_EXCLUSION "${UWP_SDK_PATH}/BinaryExclusionlist.xml")

    if((NOT UWP_API_VALIDATOR) OR (WINDOWS_STORE OR WINDOWS_PHONE))
    if((NOT ONECORE_API_VALIDATOR) OR (WINDOWS_STORE OR WINDOWS_PHONE))
        return()
    endif()

    cmake_parse_arguments(API_VALIDATOR "" "TARGET" "" ${ARGN})
    # see https://learn.microsoft.com/en-us/windows-hardware/drivers/develop/validating-windows-drivers#known-apivalidator-issues
    # ApiValidator does not run on Arm64 because AitStatic does not work on Arm64
    if(HOST_AARCH64)
        return()
    endif()

    if(X86_64)
        set(wdk_platform "x64")
    elseif(X86)
        set(wdk_platform "x86")
    elseif(ARM)
        set(wdk_platform "arm")
    elseif(AARCH64)
        set(wdk_platform "arm64")
    else()
        message(FATAL_ERROR "Unknown configuration: ${CMAKE_HOST_SYSTEM_PROCESSOR}")
    endif()

    find_file(ONECORE_API_VALIDATOR_APIS NAMES UniversalDDIs.xml
              PATHS "${PROGRAMFILES}/Windows Kits/10/build/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/universalDDIs/${wdk_platform}"
                    "${PROGRAMFILES}/Windows Kits/10/build/universalDDIs/${wdk_platform}"
              DOC "Path to UniversalDDIs.xml file")
    find_file(ONECORE_API_VALIDATOR_EXCLUSION NAMES BinaryExclusionlist.xml
              PATHS ${WDK_PATHS}
              DOC "Path to BinaryExclusionlist.xml file")

    if(NOT ONECORE_API_VALIDATOR_APIS)
        message(FATAL_ERROR "Internal error: apiValidator is found (${ONECORE_API_VALIDATOR}), but UniversalDDIs.xml file has not been found for ${wdk_platform} platform")
    endif()

    cmake_parse_arguments(API_VALIDATOR "" "TARGET" "EXTRA" "" ${ARGN})

    if(NOT API_VALIDATOR_TARGET)
        message(FATAL_ERROR "RunApiValidator requires TARGET to validate!")
@@ -69,74 +108,81 @@ function(_ov_add_api_validator_post_build_step)
|
||||
endif()
|
||||
|
||||
# collect targets
|
||||
|
||||
_ie_add_api_validator_post_build_step_recursive(TARGET ${API_VALIDATOR_TARGET})
|
||||
if (API_VALIDATOR_EXTRA)
|
||||
foreach(target IN LISTS API_VALIDATOR_EXTRA)
|
||||
_ie_add_api_validator_post_build_step_recursive(TARGET ${target})
|
||||
endforeach()
|
||||
endif()
|
||||
|
||||
# remove targets which were tested before
|
||||
foreach(target IN LISTS API_VALIDATOR_TARGETS)
|
||||
list(FIND VALIDATED_LIBRARIES ${target} index)
|
||||
if (NOT index EQUAL -1)
|
||||
list(APPEND VALIDATED_TARGETS ${target})
|
||||
endif()
|
||||
if(TARGET "${target}")
|
||||
get_target_property(orig_target ${target} ALIASED_TARGET)
|
||||
list(FIND VALIDATED_LIBRARIES ${orig_target} index)
|
||||
if (NOT index EQUAL -1)
|
||||
list(APPEND VALIDATED_TARGETS ${target})
|
||||
endif()
|
||||
endif()
|
||||
endforeach()
|
||||
foreach(item IN LISTS VALIDATED_TARGETS)
|
||||
list(REMOVE_ITEM API_VALIDATOR_TARGETS ${item})
|
||||
endforeach()
|
||||
|
||||
list(REMOVE_DUPLICATES API_VALIDATOR_TARGETS)
|
||||
|
||||
if(NOT API_VALIDATOR_TARGETS)
|
||||
return()
|
||||
endif()
|
||||
|
||||
# apply check
|
||||
|
||||
macro(api_validator_get_target_name)
|
||||
get_target_property(IS_IMPORTED ${target} IMPORTED)
|
||||
get_target_property(is_imported ${target} IMPORTED)
|
||||
get_target_property(orig_target ${target} ALIASED_TARGET)
|
||||
if(IS_IMPORTED)
|
||||
get_target_property(target_location ${target} LOCATION)
|
||||
get_filename_component(target_name "${target_location}" NAME_WE)
|
||||
if(is_imported)
|
||||
get_target_property(imported_configs ${target} IMPORTED_CONFIGURATIONS)
|
||||
foreach(imported_config RELEASE RELWITHDEBINFO DEBUG)
|
||||
if(imported_config IN_LIST imported_configs)
|
||||
get_target_property(target_location ${target} IMPORTED_LOCATION_${imported_config})
|
||||
get_filename_component(target_name "${target_location}" NAME_WE)
|
||||
break()
|
||||
endif()
|
||||
endforeach()
|
||||
unset(imported_configs)
|
||||
elseif(TARGET "${orig_target}")
|
||||
set(target_name ${orig_target})
|
||||
set(target_location $<TARGET_FILE:${orig_target}>)
|
||||
else()
|
||||
set(target_name ${target})
|
||||
set(target_location $<TARGET_FILE:${target}>)
|
||||
endif()
|
||||
|
||||
unset(orig_target)
|
||||
unset(is_imported)
|
||||
endmacro()
|
||||
|
||||
foreach(target IN LISTS API_VALIDATOR_TARGETS)
|
||||
api_validator_get_target_name()
|
||||
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.21 AND OV_GENERATOR_MULTI_CONFIG)
|
||||
set(output_file "${CMAKE_BINARY_DIR}/api_validator/$<CONFIG>/${target_name}.txt")
|
||||
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.20 AND OV_GENERATOR_MULTI_CONFIG)
|
||||
set(output_file "${OpenVINO_BINARY_DIR}/api_validator/$<CONFIG>/${target_name}.txt")
|
||||
else()
|
||||
set(output_file "${CMAKE_BINARY_DIR}/api_validator/${target_name}.txt")
|
||||
set(output_file "${OpenVINO_BINARY_DIR}/api_validator/${target_name}.txt")
|
||||
endif()
|
||||
|
||||
add_custom_command(TARGET ${API_VALIDATOR_TARGET} POST_BUILD
|
||||
COMMAND ${CMAKE_COMMAND} --config $<CONFIG>
|
||||
-D UWP_API_VALIDATOR=${UWP_API_VALIDATOR}
|
||||
-D UWP_API_VALIDATOR_TARGET=$<TARGET_FILE:${target}>
|
||||
-D UWP_API_VALIDATOR_APIS=${UWP_API_VALIDATOR_APIS}
|
||||
-D UWP_API_VALIDATOR_EXCLUSION=${UWP_API_VALIDATOR_EXCLUSION}
|
||||
-D UWP_API_VALIDATOR_OUTPUT=${output_file}
|
||||
list(APPEND post_build_commands
|
||||
${CMAKE_COMMAND} --config $<CONFIG>
|
||||
-D ONECORE_API_VALIDATOR=${ONECORE_API_VALIDATOR}
|
||||
-D ONECORE_API_VALIDATOR_TARGET=${target_location}
|
||||
-D ONECORE_API_VALIDATOR_APIS=${ONECORE_API_VALIDATOR_APIS}
|
||||
-D ONECORE_API_VALIDATOR_EXCLUSION=${ONECORE_API_VALIDATOR_EXCLUSION}
|
||||
-D ONECORE_API_VALIDATOR_OUTPUT=${output_file}
|
||||
-D CMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}
|
||||
-P "${IEDevScripts_DIR}/api_validator/api_validator_run.cmake"
|
||||
BYPRODUCTS ${output_file}
|
||||
COMMENT "[apiValidator] Check ${target_name} for OneCore compliance"
|
||||
VERBATIM)
|
||||
-P "${IEDevScripts_DIR}/api_validator/api_validator_run.cmake")
|
||||
list(APPEND byproducts_files ${output_file})
|
||||
|
||||
unset(target_name)
|
||||
unset(target_location)
|
||||
endforeach()
|
||||
|
||||
add_custom_command(TARGET ${API_VALIDATOR_TARGET} POST_BUILD
|
||||
COMMAND ${post_build_commands}
|
||||
BYPRODUCTS ${byproducts_files}
|
||||
COMMENT "[apiValidator] Check ${API_VALIDATOR_TARGET} and dependencies for OneCore compliance"
|
||||
VERBATIM)
|
||||
|
||||
# update list of validated libraries
|
||||
|
||||
list(APPEND VALIDATED_LIBRARIES ${API_VALIDATOR_TARGETS})
|
||||
set(VALIDATED_LIBRARIES "${VALIDATED_LIBRARIES}" CACHE INTERNAL "" FORCE)
|
||||
list(APPEND VALIDATED_TARGETS ${API_VALIDATOR_TARGETS})
|
||||
set(VALIDATED_TARGETS "${VALIDATED_TARGETS}" CACHE INTERNAL "" FORCE)
|
||||
endfunction()
|
||||
|
||||
#
|
||||
|
||||
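For orientation, a hedged sketch of a typical call site for this helper; the target name is made up for illustration, and the pattern follows the ov_add_frontend hunk later in this diff, which notes that the step must come after all target_link_libraries calls:

# my_plugin is a hypothetical target used only for illustration
add_library(my_plugin SHARED src/plugin.cpp)
target_link_libraries(my_plugin PRIVATE openvino::runtime)
# must be called after all target_link_libraries calls
ie_add_api_validator_post_build_step(TARGET my_plugin)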
@@ -4,9 +4,9 @@

cmake_policy(SET CMP0012 NEW)

foreach(var UWP_API_VALIDATOR UWP_API_VALIDATOR_TARGET
            UWP_API_VALIDATOR_APIS UWP_API_VALIDATOR_EXCLUSION
            UWP_API_VALIDATOR_OUTPUT CMAKE_TOOLCHAIN_FILE)
foreach(var ONECORE_API_VALIDATOR ONECORE_API_VALIDATOR_TARGET
            ONECORE_API_VALIDATOR_APIS ONECORE_API_VALIDATOR_EXCLUSION
            ONECORE_API_VALIDATOR_OUTPUT CMAKE_TOOLCHAIN_FILE)
    if(NOT DEFINED ${var})
        message(FATAL_ERROR "Variable ${var} is not defined")
    endif()
@@ -14,18 +14,18 @@ endforeach()

# create command

if(NOT EXISTS "${UWP_API_VALIDATOR_APIS}")
    message(FATAL_ERROR "${UWP_API_VALIDATOR_APIS} does not exist")
if(NOT EXISTS "${ONECORE_API_VALIDATOR_APIS}")
    message(FATAL_ERROR "${ONECORE_API_VALIDATOR_APIS} does not exist")
endif()

set(command "${UWP_API_VALIDATOR}"
    -SupportedApiXmlFiles:${UWP_API_VALIDATOR_APIS}
    -DriverPackagePath:${UWP_API_VALIDATOR_TARGET})
if(EXISTS "${UWP_API_VALIDATOR_EXCLUSION}")
set(command "${ONECORE_API_VALIDATOR}"
    -SupportedApiXmlFiles:${ONECORE_API_VALIDATOR_APIS}
    -DriverPackagePath:${ONECORE_API_VALIDATOR_TARGET})
if(EXISTS "${ONECORE_API_VALIDATOR_EXCLUSION}")
    list(APPEND command
        -BinaryExclusionListXmlFile:${UWP_API_VALIDATOR_EXCLUSION}
        -BinaryExclusionListXmlFile:${ONECORE_API_VALIDATOR_EXCLUSION}
        -StrictCompliance:TRUE)
    set(UWP_HAS_BINARY_EXCLUSION ON)
    set(ONECORE_HAS_BINARY_EXCLUSION ON)
endif()

# execute
@@ -36,13 +36,13 @@ execute_process(COMMAND ${command}
    RESULT_VARIABLE exit_code
    OUTPUT_STRIP_TRAILING_WHITESPACE)

file(WRITE "${UWP_API_VALIDATOR_OUTPUT}" "${output_message}\n\n\n${error_message}")
file(WRITE "${ONECORE_API_VALIDATOR_OUTPUT}" "CMAKE COMMAND: ${command}\n\n\n${output_message}\n\n\n${error_message}")

# post-process output

get_filename_component(name "${UWP_API_VALIDATOR_TARGET}" NAME)
get_filename_component(name "${ONECORE_API_VALIDATOR_TARGET}" NAME)

if(NOT UWP_HAS_BINARY_EXCLUSION)
if(NOT ONECORE_HAS_BINARY_EXCLUSION)
    if(CMAKE_TOOLCHAIN_FILE MATCHES "onecoreuap.toolchain.cmake$")
        # empty since we compile with static MSVC runtime
    else()
@@ -66,7 +66,7 @@ endif()

# write output

if(UWP_HAS_BINARY_EXCLUSION AND NOT exit_code EQUAL 0)
if(ONECORE_HAS_BINARY_EXCLUSION AND NOT exit_code EQUAL 0)
    message(FATAL_ERROR "${error_message}")
endif()

cmake/developer_package/check_python_requirements.py (new file, 52 lines)
@@ -0,0 +1,52 @@
import pkg_resources
import re
import os


def check_python_requirements(requirements_path: str) -> None:
    """
    Checks if the requirements defined in `requirements_path` are installed
    in the active Python environment, while also taking constraints.txt files
    into account.
    """

    constraints = {}
    constraints_path = None
    requirements = []

    # read requirements and find constraints file
    with open(requirements_path) as f:
        raw_requirements = f.readlines()
    for line in raw_requirements:
        if line.startswith("-c"):
            constraints_path = os.path.join(os.path.dirname(requirements_path), line.split(' ')[1][:-1])

    # read constraints if they exist
    if constraints_path:
        with open(constraints_path) as f:
            raw_constraints = f.readlines()
        for line in raw_constraints:
            if line.startswith("#") or line == "\n":
                continue
            line = line.replace("\n", "")
            package, delimiter, constraint = re.split("(~|=|<|>|;)", line, maxsplit=1)
            if constraints.get(package) is None:
                constraints[package] = [delimiter + constraint]
            else:
                constraints[package].extend([delimiter + constraint])
        for line in raw_requirements:
            if line.startswith(("#", "-c")):
                continue
            line = line.replace("\n", "")
            if re.search(r"\W", line):
                requirements.append(line)
            else:
                constraint = constraints.get(line)
                if constraint:
                    for marker in constraint:
                        requirements.append(line + marker)
                else:
                    requirements.append(line)
    else:
        requirements = raw_requirements
    pkg_resources.require(requirements)
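A minimal sketch of how this new helper is wired up from CMake; it mirrors the ov_check_pip_packages hunk further down in this diff, and the requirements file location here is an assumption:

# hedged sketch; the requirements file path is an assumption, and the
# working directory is what makes check_python_requirements.py importable
execute_process(
    COMMAND ${PYTHON_EXECUTABLE} -c "
from check_python_requirements import check_python_requirements ;
check_python_requirements('${CMAKE_SOURCE_DIR}/requirements.txt') ;
"
    WORKING_DIRECTORY "${IEDevScripts_DIR}"
    RESULT_VARIABLE EXIT_CODE
    OUTPUT_VARIABLE OUTPUT_TEXT
    ERROR_VARIABLE ERROR_TEXT)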
@@ -3,23 +3,23 @@
#

if(ENABLE_CLANG_FORMAT)
    set(clang_format_required_version 9)
    set(CLANG_FORMAT_FILENAME clang-format-${clang_format_required_version} clang-format)
    set(CLANG_FORMAT_REQUIRED_VERSION 9 CACHE STRING "Clang-format version to use")
    set(CLANG_FORMAT_FILENAME clang-format-${CLANG_FORMAT_REQUIRED_VERSION} clang-format)
    find_host_program(CLANG_FORMAT NAMES ${CLANG_FORMAT_FILENAME} PATHS ENV PATH)
    if(CLANG_FORMAT)
        execute_process(COMMAND ${CLANG_FORMAT} ${CMAKE_CURRENT_SOURCE_DIR} ARGS --version OUTPUT_VARIABLE CLANG_VERSION)
        if(NOT CLANG_VERSION)
            message(WARNING "Supported clang-format version is ${clang_format_required_version}!")
            message(WARNING "Supported clang-format version is ${CLANG_FORMAT_REQUIRED_VERSION}!")
            set(ENABLE_CLANG_FORMAT OFF)
        else()
            string(REGEX REPLACE "[^0-9]+([0-9]+)\\..*" "\\1" CLANG_FORMAT_MAJOR_VERSION ${CLANG_VERSION})
            if(NOT CLANG_FORMAT_MAJOR_VERSION EQUAL clang_format_required_version)
            if(NOT CLANG_FORMAT_MAJOR_VERSION EQUAL CLANG_FORMAT_REQUIRED_VERSION)
                message(WARNING "Supported clang-format version is 9! Provided version ${CLANG_FORMAT_MAJOR_VERSION}")
                set(ENABLE_CLANG_FORMAT OFF)
            endif()
        endif()
    else()
        message(WARNING "Supported clang-format-${clang_format_required_version} is not found!")
        message(WARNING "Supported clang-format-${CLANG_FORMAT_REQUIRED_VERSION} is not found!")
        set(ENABLE_CLANG_FORMAT OFF)
    endif()
endif()
@@ -70,6 +70,10 @@ function(add_clang_format_target TARGET_NAME)
        continue()
    endif()

    if(IS_DIRECTORY "${source_file}")
        message(FATAL_ERROR "Directory ${source_file} cannot be passed to clang-format")
    endif()

    file(RELATIVE_PATH source_file_relative "${CMAKE_CURRENT_SOURCE_DIR}" "${source_file}")
    set(output_file "${CMAKE_CURRENT_BINARY_DIR}/clang_format/${source_file_relative}.clang")
    string(REPLACE ".." "__" output_file "${output_file}")

@@ -412,11 +412,6 @@ else()
    # Warn if an undefined identifier is evaluated in an #if directive. Such identifiers are replaced with zero.
    ie_add_compiler_flags(-Wundef)

    check_cxx_compiler_flag("-Wsuggest-override" SUGGEST_OVERRIDE_SUPPORTED)
    if(SUGGEST_OVERRIDE_SUPPORTED)
        set(CMAKE_CXX_FLAGS "-Wsuggest-override ${CMAKE_CXX_FLAGS}")
    endif()

    #
    # Warnings as errors
    #
@@ -460,14 +455,13 @@ else()
        endif()
    endif()

    # if(OV_COMPILER_IS_CLANG)
    # ie_add_compiler_flags(-Wshorten-64-to-32)
    # endif()
    # TODO
    if(OV_COMPILER_IS_CLANG)
        ie_add_compiler_flags(-Wno-delete-non-abstract-non-virtual-dtor)
    check_cxx_compiler_flag("-Wsuggest-override" SUGGEST_OVERRIDE_SUPPORTED)
    if(SUGGEST_OVERRIDE_SUPPORTED)
        set(CMAKE_CXX_FLAGS "-Wsuggest-override ${CMAKE_CXX_FLAGS}")
    endif()

    check_cxx_compiler_flag("-Wunused-but-set-variable" UNUSED_BUT_SET_VARIABLE_SUPPORTED)

#
# link_system_libraries(target <PUBLIC | PRIVATE | INTERFACE> <lib1 [lib2 lib3 ...]>)
#
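Since the signature comment above is the only documentation for this helper, a hedged usage sketch may help; the target and the library are made-up names, and the SYSTEM-include behavior is an assumption based on the helper's name:

# presumably links a third-party library while treating its headers as SYSTEM
# includes; my_plugin and the protobuf target are assumptions for illustration
link_system_libraries(my_plugin PRIVATE protobuf::libprotobuf)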
@@ -499,6 +493,11 @@ endfunction()
# Tries to use gold linker in current scope (directory, function)
#
function(ov_try_use_gold_linker)
    # don't use the gold linker, if the mold linker is set
    if(CMAKE_EXE_LINKER_FLAGS MATCHES "mold" OR CMAKE_MODULE_LINKER_FLAGS MATCHES "mold" OR CMAKE_SHARED_LINKER_FLAGS MATCHES "mold")
        return()
    endif()

    # gold linker on ubuntu20.04 may fail to link binaries build with sanitizer
    if(CMAKE_COMPILER_IS_GNUCXX AND NOT ENABLE_SANITIZER AND NOT CMAKE_CROSSCOMPILING)
        set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fuse-ld=gold" PARENT_SCOPE)

@@ -33,7 +33,7 @@ if (ENABLE_UB_SANITIZER)
    # https://github.com/KhronosGroup/OpenCL-CLHPP/issues/17
    # Mute -fsanitize=function Indirect call of a function through a function pointer of the wrong type.
    # Sample cases:
    # call to function GetAPIVersion through pointer to incorrect function type 'void *(*)()'
    # call to function get_api_version through pointer to incorrect function type 'void *(*)()'
    # Mute -fsanitize=alignment Use of a misaligned pointer or creation of a misaligned reference. Also sanitizes assume_aligned-like attributes.
    # Sample cases:
    # VPU_FixedMaxHeapTest.DefaultConstructor test case load of misaligned address 0x62000000187f for type 'const DataType', which requires 4 byte alignment

@@ -50,6 +50,9 @@ else()
    if(ENABLE_INTEGRITYCHECK)
        set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /INTEGRITYCHECK")
    endif()
    if(ENABLE_QSPECTRE)
        ie_add_compiler_flags(/Qspectre)
    endif()
endif()

set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} ${IE_C_CXX_FLAGS}")

@@ -8,7 +8,7 @@ include(target_flags)
# FIXME: there are compiler failures with LTO and Cross-Compile toolchains. Disabling for now, but
# this must be addressed in a proper way
ie_dependent_option (ENABLE_LTO "Enable Link Time Optimization" OFF
    "LINUX OR (APPLE AND AARCH64);EMSCRIPTEN OR NOT CMAKE_CROSSCOMPILING;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)
    "LINUX;EMSCRIPTEN OR NOT CMAKE_CROSSCOMPILING;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.9" OFF)

ie_option (OS_FOLDER "create OS dedicated folder in output" OFF)

@@ -26,6 +26,8 @@ endif()

ie_option (CMAKE_COMPILE_WARNING_AS_ERROR "Enable warnings as errors" ${CMAKE_COMPILE_WARNING_AS_ERROR_DEFAULT})

ie_dependent_option (ENABLE_QSPECTRE "Enable Qspectre mitigation" OFF "CMAKE_CXX_COMPILER_ID STREQUAL MSVC" OFF)

ie_dependent_option (ENABLE_INTEGRITYCHECK "build DLLs with /INTEGRITYCHECK flag" OFF "CMAKE_CXX_COMPILER_ID STREQUAL MSVC" OFF)

ie_option (ENABLE_SANITIZER "enable checking memory errors via AddressSanitizer" OFF)

@@ -15,8 +15,8 @@ set(OV_FRONTEND_MAP_DEFINITION " FrontendsStaticRegistry registry = {")

foreach(frontend IN LISTS FRONTEND_NAMES)
    # common
    set(_OV_FRONTEND_DATA_FUNC "GetFrontEndData${frontend}")
    set(_OV_VERSION_FUNC "GetAPIVersion${frontend}")
    set(_OV_FRONTEND_DATA_FUNC "get_front_end_data_${frontend}")
    set(_OV_VERSION_FUNC "get_api_version_${frontend}")

    # declarations
    set(OV_FRONTEND_DECLARATIONS "${OV_FRONTEND_DECLARATIONS}

@@ -182,7 +182,7 @@ macro(ov_add_frontend)
        add_library(openvino::frontend::${OV_FRONTEND_NAME} ALIAS ${TARGET_NAME})
    endif()

    # Shutdown protobuf when unloading the front dynamic library
    # Shutdown protobuf when unloading the frontend dynamic library
    if(proto_files AND BUILD_SHARED_LIBS)
        target_link_libraries(${TARGET_NAME} PRIVATE ov_protobuf_shutdown)
    endif()
@@ -190,21 +190,8 @@ macro(ov_add_frontend)
    if(NOT BUILD_SHARED_LIBS)
        # override default function names
        target_compile_definitions(${TARGET_NAME} PRIVATE
            "-DGetFrontEndData=GetFrontEndData${OV_FRONTEND_NAME}"
            "-DGetAPIVersion=GetAPIVersion${OV_FRONTEND_NAME}")
    endif()

    # enable LTO
    set_target_properties(${TARGET_NAME} PROPERTIES
        INTERPROCEDURAL_OPTIMIZATION_RELEASE ${ENABLE_LTO})

    if(OV_FRONTEND_SKIP_NCC_STYLE)
        # frontend's CMakeLists.txt must define its own custom 'ov_ncc_naming_style' step
    else()
        ov_ncc_naming_style(FOR_TARGET ${TARGET_NAME}
            SOURCE_DIRECTORY "${frontend_root_dir}/include"
            ADDITIONAL_INCLUDE_DIRECTORIES
                $<TARGET_PROPERTY:frontend_common::static,INTERFACE_INCLUDE_DIRECTORIES>)
            "-Dget_front_end_data=get_front_end_data_${OV_FRONTEND_NAME}"
            "-Dget_api_version=get_api_version_${OV_FRONTEND_NAME}")
    endif()

    target_include_directories(${TARGET_NAME}
@@ -217,8 +204,6 @@ macro(ov_add_frontend)
    ie_add_vs_version_file(NAME ${TARGET_NAME}
        FILEDESCRIPTION ${OV_FRONTEND_FILEDESCRIPTION})

    ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME})

    target_link_libraries(${TARGET_NAME} PUBLIC openvino::runtime)
    target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES})
    ov_add_library_version(${TARGET_NAME})
@@ -255,10 +240,30 @@ macro(ov_add_frontend)
    endif()

    add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME}
        EXCLUDE_PATTERNS ${PROTO_SRCS} ${PROTO_HDRS} ${flatbuffers_schema_files})
        EXCLUDE_PATTERNS ${PROTO_SRCS} ${PROTO_HDRS} ${proto_files} ${flatbuffers_schema_files})

    # enable LTO
    set_target_properties(${TARGET_NAME} PROPERTIES
        INTERPROCEDURAL_OPTIMIZATION_RELEASE ${ENABLE_LTO})

    if(OV_FRONTEND_SKIP_NCC_STYLE)
        # frontend's CMakeLists.txt must define its own custom 'ov_ncc_naming_style' step
    else()
        ov_ncc_naming_style(FOR_TARGET ${TARGET_NAME}
            SOURCE_DIRECTORIES "${frontend_root_dir}/include"
                               "${frontend_root_dir}/src"
            ADDITIONAL_INCLUDE_DIRECTORIES
                $<TARGET_PROPERTY:${TARGET_NAME},INTERFACE_INCLUDE_DIRECTORIES>
                $<TARGET_PROPERTY:${TARGET_NAME},INCLUDE_DIRECTORIES>)
    endif()

    add_dependencies(ov_frontends ${TARGET_NAME})

    # must be called after all target_link_libraries
    ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME})

    # installation

    if(NOT OV_FRONTEND_SKIP_INSTALL)
        if(BUILD_SHARED_LIBS)
            # Note:

@@ -10,12 +10,12 @@

namespace {

using GetFrontEndDataFunc = void*();
using GetAPIVersionFunc = ov::frontend::FrontEndVersion();
using get_front_end_data_func = void*();
using get_api_version_func = ov::frontend::FrontEndVersion();

struct Value {
    GetFrontEndDataFunc* m_dataFunc;
    GetAPIVersionFunc* m_versionFunc;
    get_front_end_data_func* m_dataFunc;
    get_api_version_func* m_versionFunc;
};

using FrontendsStaticRegistry = std::vector<Value>;

@@ -1,27 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

if(UNIX AND ENABLE_ERROR_HIGHLIGHT)
    function(message)
        string(ASCII 27 ESC)
        set(RESET "${ESC}[m")
        set(RED "${ESC}[31;1m")
        set(YELLOW "${ESC}[33;1m")

        list(GET ARGV 0 MessageType)
        list(REMOVE_AT ARGV 0)

        foreach(arg IN LISTS ARGV)
            set(_msg "${_msg}${arg}")
        endforeach()

        if(MessageType STREQUAL FATAL_ERROR OR MessageType STREQUAL SEND_ERROR)
            _message(${MessageType} "${RED}${_msg}${RESET}")
        elseif(MessageType STREQUAL WARNING)
            _message(${MessageType} "${YELLOW}${_msg}${RESET}")
        else()
            _message(${MessageType} "${_msg}")
        endif()
    endfunction()
endif()
@@ -63,6 +63,24 @@ function(ov_native_compile_external_project)
        set(ARG_NATIVE_SOURCE_SUBDIR SOURCE_SUBDIR ${ARG_NATIVE_SOURCE_SUBDIR})
    endif()

    if(OV_GENERATOR_MULTI_CONFIG)
        if(CMAKE_GENERATOR MATCHES "^Ninja Multi-Config$")
            list(APPEND ARG_CMAKE_ARGS "-DCMAKE_CONFIGURATION_TYPES=${CMAKE_DEFAULT_BUILD_TYPE}")
            list(APPEND ARG_CMAKE_ARGS "-DCMAKE_DEFAULT_BUILD_TYPE=${CMAKE_DEFAULT_BUILD_TYPE}")
        endif()
    else()
        list(APPEND ARG_CMAKE_ARGS "-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}")
    endif()

    if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.21)
        if(DEFINED CMAKE_CXX_LINKER_LAUNCHER)
            list(APPEND ARG_CMAKE_ARGS "-DCMAKE_CXX_LINKER_LAUNCHER=${CMAKE_CXX_LINKER_LAUNCHER}")
        endif()
        if(DEFINED CMAKE_C_LINKER_LAUNCHER)
            list(APPEND ARG_CMAKE_ARGS "-DCMAKE_C_LINKER_LAUNCHER=${CMAKE_C_LINKER_LAUNCHER}")
        endif()
    endif()

    ExternalProject_Add(${ARG_TARGET_NAME}
        # Directory Options
        SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}"
@@ -75,13 +93,10 @@ function(ov_native_compile_external_project)
        CMAKE_ARGS
            "-DCMAKE_CXX_COMPILER_LAUNCHER=${CMAKE_CXX_COMPILER_LAUNCHER}"
            "-DCMAKE_C_COMPILER_LAUNCHER=${CMAKE_C_COMPILER_LAUNCHER}"
            "-DCMAKE_CXX_LINKER_LAUNCHER=${CMAKE_CXX_LINKER_LAUNCHER}"
            "-DCMAKE_C_LINKER_LAUNCHER=${CMAKE_C_LINKER_LAUNCHER}"
            "-DCMAKE_CXX_FLAGS=${compile_flags}"
            "-DCMAKE_C_FLAGS=${compile_flags}"
            "-DCMAKE_POLICY_DEFAULT_CMP0069=NEW"
            "-DCMAKE_INSTALL_PREFIX=${ARG_NATIVE_INSTALL_DIR}"
            "-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}"
            ${ARG_CMAKE_ARGS}
        CMAKE_GENERATOR "${CMAKE_GENERATOR}"
        ${ARG_NATIVE_SOURCE_SUBDIR}

@@ -112,13 +112,13 @@ endif()

#
# ov_ncc_naming_style(FOR_TARGET target_name
#                     SOURCE_DIRECTORY dir
#                     [SOURCE_DIRECTORIES dir1 dir2 ...]
#                     [STYLE_FILE style_file.style]
#                     [ADDITIONAL_INCLUDE_DIRECTORIES dir1 dir2 ..]
#                     [DEFINITIONS def1 def2 ..])
#
# FOR_TARGET - name of the target
# SOURCE_DIRECTORY - directory to check sources from
# SOURCE_DIRECTORIES - directories to check sources from
# STYLE_FILE - path to the specific style file
# ADDITIONAL_INCLUDE_DIRECTORIES - additional include directories used in checked headers
# DEFINITIONS - additional definitions passed to preprocessor stage
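A hedged usage sketch of the updated multi-directory signature; the target name and paths are made-up examples modeled on the ov_add_frontend hunk earlier in this diff:

# my_frontend is a hypothetical target used only for illustration
ov_ncc_naming_style(FOR_TARGET my_frontend
    SOURCE_DIRECTORIES "${CMAKE_CURRENT_SOURCE_DIR}/include"
                       "${CMAKE_CURRENT_SOURCE_DIR}/src"
    ADDITIONAL_INCLUDE_DIRECTORIES
        $<TARGET_PROPERTY:my_frontend,INCLUDE_DIRECTORIES>)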
@@ -129,9 +129,9 @@ function(ov_ncc_naming_style)
    endif()

    cmake_parse_arguments(NCC_STYLE "FAIL"
        "FOR_TARGET;SOURCE_DIRECTORY;STYLE_FILE" "ADDITIONAL_INCLUDE_DIRECTORIES;DEFINITIONS" ${ARGN})
        "FOR_TARGET;STYLE_FILE" "SOURCE_DIRECTORIES;ADDITIONAL_INCLUDE_DIRECTORIES;DEFINITIONS" ${ARGN})

    foreach(var FOR_TARGET SOURCE_DIRECTORY)
    foreach(var FOR_TARGET SOURCE_DIRECTORIES)
        if(NOT DEFINED NCC_STYLE_${var})
            message(FATAL_ERROR "${var} is not defined in ov_ncc_naming_style function")
        endif()
@@ -141,18 +141,18 @@ function(ov_ncc_naming_style)
        set(NCC_STYLE_STYLE_FILE ${ncc_style_dir}/openvino.style)
    endif()

    file(GLOB_RECURSE sources
        RELATIVE "${NCC_STYLE_SOURCE_DIRECTORY}"
        "${NCC_STYLE_SOURCE_DIRECTORY}/*.hpp"
        "${NCC_STYLE_SOURCE_DIRECTORY}/*.cpp")
    foreach(source_dir IN LISTS NCC_STYLE_SOURCE_DIRECTORIES)
        file(GLOB_RECURSE local_sources "${source_dir}/*.hpp" "${source_dir}/*.cpp")
        list(APPEND sources ${local_sources})
    endforeach()

    list(APPEND NCC_STYLE_ADDITIONAL_INCLUDE_DIRECTORIES "${NCC_STYLE_SOURCE_DIRECTORY}")
    # without it sources with same name from different directories will map to same .ncc_style target
    file(RELATIVE_PATH source_dir_rel ${CMAKE_SOURCE_DIR} ${NCC_STYLE_SOURCE_DIRECTORY})
    list(APPEND NCC_STYLE_ADDITIONAL_INCLUDE_DIRECTORIES ${NCC_STYLE_SOURCE_DIRECTORIES})

    foreach(source IN LISTS sources)
        set(output_file "${ncc_style_bin_dir}/${source_dir_rel}/${source}.ncc_style")
        set(full_source_path "${NCC_STYLE_SOURCE_DIRECTORY}/${source}")
    foreach(source_file IN LISTS sources)
        get_filename_component(source_dir "${source_file}" DIRECTORY)
        file(RELATIVE_PATH source_dir_rel "${CMAKE_SOURCE_DIR}" "${source_dir}")
        get_filename_component(source_name "${source_file}" NAME)
        set(output_file "${ncc_style_bin_dir}/${source_dir_rel}/${source_name}.ncc_style")

        add_custom_command(
            OUTPUT
@@ -161,7 +161,7 @@ function(ov_ncc_naming_style)
                "${CMAKE_COMMAND}"
                -D "PYTHON_EXECUTABLE=${PYTHON_EXECUTABLE}"
                -D "NCC_PY_SCRIPT=${ncc_script_py}"
                -D "INPUT_FILE=${full_source_path}"
                -D "INPUT_FILE=${source_file}"
                -D "OUTPUT_FILE=${output_file}"
                -D "DEFINITIONS=${NCC_STYLE_DEFINITIONS}"
                -D "CLANG_LIB_PATH=${libclang_location}"
@@ -170,12 +170,12 @@ function(ov_ncc_naming_style)
                -D "EXPECTED_FAIL=${NCC_STYLE_FAIL}"
                -P "${ncc_style_dir}/ncc_run.cmake"
            DEPENDS
                "${full_source_path}"
                "${source_file}"
                "${ncc_style_dir}/openvino.style"
                "${ncc_script_py}"
                "${ncc_style_dir}/ncc_run.cmake"
            COMMENT
                "[ncc naming style] ${source}"
                "[ncc naming style] ${source_dir_rel}/${source_name}"
            VERBATIM)
        list(APPEND output_files ${output_file})
    endforeach()
@@ -191,6 +191,6 @@ endfunction()

if(TARGET ncc_all)
    ov_ncc_naming_style(FOR_TARGET ncc_all
        SOURCE_DIRECTORY "${ncc_style_dir}/self_check"
        SOURCE_DIRECTORIES "${ncc_style_dir}/self_check"
        FAIL)
endif()

@@ -1,7 +1,7 @@
# custom OpenVINO values
CppMethod: '^(operator\W+|[a-z_\d]+|signaling_NaN|quiet_NaN)$'
ClassName: '^([A-Z][\w]+|b?float16|numeric_limits|ngraph_error|stopwatch|unsupported_op)$'
StructName: '^([A-Z][\w]+|element_type_traits|hash|oi_pair)$'
StructName: '^([A-Z][\w]+|element_type_traits|hash|oi_pair|stat)$'
FunctionName: '^(operator\W+|[a-z_\d]+)|PrintTo$'
Namespace: '^([a-z\d_]*|InferenceEngine)$'
NamespaceAlias: '^([a-z\d_]+|InferenceEngine)$'
@@ -12,7 +12,7 @@ TemplateNonTypeParameter: '^\w*$'
ClassTemplate: '^([A-Z][\w]+|element_type_traits)$'
TemplateTypeParameter: '^\w*$'
ParameterName: '^\w*$'
FunctionTemplate: '^(operator.+|[\w]+|Impl<.*>)$'
FunctionTemplate: '^(operator.+|[\w]+|SoPtr.+|Impl<.*>)$'
TypeAliasName: '^\w+$'
VariableReference: '^\w+$'

@@ -27,7 +27,7 @@ CxxDynamicCastExpression: '^.*$'
# not needed values
ClassTemplatePartialSpecialization: '^.*$'
ConversionFunction: '^.*$'
UsingDirective: 'XXXX'
UsingDirective: '^.*$'
ClassAccessSpecifier: '^.*$' # looks like can be fixed
TypeReference: '^.*$' # looks like can be fixed
CxxBaseSpecifier: '^.*$' # looks like can be fixed

@@ -194,7 +194,7 @@ macro(ie_cpack)
        set(CPACK_STRIP_FILES ON)
    endif()

    # TODO: replace with openvino
    # TODO: replace with openvino and handle multi-config generators case
    if(WIN32)
        set(CPACK_PACKAGE_NAME inference-engine_${CMAKE_BUILD_TYPE})
    else()
@@ -202,6 +202,7 @@ macro(ie_cpack)
    endif()

    set(CPACK_PACKAGE_VERSION "${OpenVINO_VERSION}")
    # build version can be empty in case we are running cmake out of git repository
    if(NOT OpenVINO_VERSION_BUILD STREQUAL "000")
        set(CPACK_PACKAGE_VERSION "${CPACK_PACKAGE_VERSION}.${OpenVINO_VERSION_BUILD}")
    endif()

@@ -10,6 +10,24 @@ endif()

set(rpmlint_passed ON)

execute_process(COMMAND "${rpmlint_PROGRAM}" --version
    RESULT_VARIABLE rpmlint_exit_code
    OUTPUT_VARIABLE rpmlint_version)

if(NOT rpmlint_exit_code EQUAL 0)
    message(FATAL_ERROR "Failed to get ${rpmlint_PROGRAM} version. Output is '${rpmlint_version}'")
endif()

if(rpmlint_version MATCHES "([0-9]+)\.([0-9]+)")
    set(rpmlint_version "${CMAKE_MATCH_1}.${CMAKE_MATCH_2}")
else()
    message(FATAL_ERROR "Failed to parse rpmlint version '${rpmlint_version}'")
endif()

if(rpmlint_version VERSION_GREATER_EQUAL 2.0)
    set(rpmlint_has_strict_option ON)
endif()

foreach(rpm_file IN LISTS CPACK_PACKAGE_FILES)
    get_filename_component(rpm_name "${rpm_file}" NAME)
    get_filename_component(dir_name "${rpm_file}" DIRECTORY)
@@ -17,20 +35,25 @@ foreach(rpm_file IN LISTS CPACK_PACKAGE_FILES)

    set(rpmlint_overrides "${dir_name}/${rpm_name}.rpmlintrc")
    if(EXISTS "${rpmlint_overrides}")
        set(file_option --file "${rpmlint_overrides}")
        set(rpmlint_options --file "${rpmlint_overrides}")
    endif()
    if(rpmlint_has_strict_option)
        list(APPEND rpmlint_options --strict)
    endif()

    execute_process(COMMAND "${rpmlint_PROGRAM}" --strict ${file_option} ${rpm_file}
    execute_process(COMMAND "${rpmlint_PROGRAM}" ${rpmlint_options} ${rpm_file}
        RESULT_VARIABLE rpmlint_exit_code
        OUTPUT_VARIABLE rpmlint_output)

    if(NOT rpmlint_exit_code EQUAL 0)
    if(NOT rpmlint_exit_code EQUAL 0 OR NOT rpmlint_has_strict_option)
        message("Package ${rpm_name}:")
        message("${rpmlint_output}")
        set(rpmlint_passed OFF)
        if(rpmlint_has_strict_option)
            set(rpmlint_passed OFF)
        endif()
    endif()

    unset(file_option)
    unset(rpmlint_options)
endforeach()

if(NOT rpmlint_passed)

@@ -2,7 +2,7 @@
# SPDX-License-Identifier: Apache-2.0
#

foreach(var IE_DEVICE_MAPPING IE_PLUGINS_HPP_HEADER IE_PLUGINS_HPP_HEADER_IN)
foreach(var OV_DEVICE_MAPPING BUILD_SHARED_LIBS OV_PLUGINS_HPP_HEADER OV_PLUGINS_HPP_HEADER_IN)
    if(NOT DEFINED ${var})
        message(FATAL_ERROR "${var} is required, but not defined")
    endif()
@@ -10,29 +10,15 @@ endforeach()

# configure variables

set(IE_PLUGINS_DECLARATIONS "")
set(IE_PLUGINS_MAP_DEFINITION
set(OV_PLUGINS_DECLARATIONS "")
set(OV_PLUGINS_MAP_DEFINITION
    " static const std::map<Key, Value> plugins_hpp = {")

foreach(dev_map IN LISTS IE_DEVICE_MAPPING)
foreach(dev_map IN LISTS OV_DEVICE_MAPPING)
    string(REPLACE ":" ";" dev_map "${dev_map}")
    list(GET dev_map 0 mapped_dev_name)
    list(GET dev_map 1 actual_dev_name)

    # common
    set(_IE_CREATE_PLUGIN_FUNC "CreatePluginEngine${actual_dev_name}")
    set(_IE_CREATE_EXTENSION_FUNC "CreateExtensionShared${actual_dev_name}")

    # declarations
    set(IE_PLUGINS_DECLARATIONS "${IE_PLUGINS_DECLARATIONS}
IE_DEFINE_PLUGIN_CREATE_FUNCTION_DECLARATION(${_IE_CREATE_PLUGIN_FUNC});")
    if(${actual_dev_name}_AS_EXTENSION)
        set(IE_PLUGINS_DECLARATIONS "${IE_PLUGINS_DECLARATIONS}
IE_DEFINE_EXTENSION_CREATE_FUNCTION_DECLARATION(${_IE_CREATE_EXTENSION_FUNC});")
    else()
        set(_IE_CREATE_EXTENSION_FUNC "nullptr")
    endif()

    # definitions
    set(dev_config "{")
    if(${mapped_dev_name}_CONFIG)
@@ -48,11 +34,31 @@ IE_DEFINE_EXTENSION_CREATE_FUNCTION_DECLARATION(${_IE_CREATE_EXTENSION_FUNC});")
    endif()
    set(dev_config "${dev_config}}")

    set(IE_PLUGINS_MAP_DEFINITION "${IE_PLUGINS_MAP_DEFINITION}
    { \"${mapped_dev_name}\", Value { ${_IE_CREATE_PLUGIN_FUNC}, ${_IE_CREATE_EXTENSION_FUNC}, ${dev_config} } },")

    if(NOT BUILD_SHARED_LIBS)
        # common
        set(_OV_CREATE_PLUGIN_FUNC "CreatePluginEngine${actual_dev_name}")
        set(_OV_CREATE_EXTENSION_FUNC "CreateExtensionShared${actual_dev_name}")

        # declarations
        set(OV_PLUGINS_DECLARATIONS "${OV_PLUGINS_DECLARATIONS}
IE_DEFINE_PLUGIN_CREATE_FUNCTION_DECLARATION(${_OV_CREATE_PLUGIN_FUNC});")
        if(${actual_dev_name}_AS_EXTENSION)
            set(OV_PLUGINS_DECLARATIONS "${OV_PLUGINS_DECLARATIONS}
IE_DEFINE_EXTENSION_CREATE_FUNCTION_DECLARATION(${_OV_CREATE_EXTENSION_FUNC});")
        else()
            set(_OV_CREATE_EXTENSION_FUNC "nullptr")
        endif()

        set(OV_PLUGINS_MAP_DEFINITION "${OV_PLUGINS_MAP_DEFINITION}
    { \"${mapped_dev_name}\", Value { ${_OV_CREATE_PLUGIN_FUNC}, ${_OV_CREATE_EXTENSION_FUNC}, ${dev_config} } },")
    else()
        set(OV_PLUGINS_MAP_DEFINITION "${OV_PLUGINS_MAP_DEFINITION}
    { \"${mapped_dev_name}\", Value { \"${actual_dev_name}\", ${dev_config} } },")
    endif()
endforeach()

set(IE_PLUGINS_MAP_DEFINITION "${IE_PLUGINS_MAP_DEFINITION}
set(OV_PLUGINS_MAP_DEFINITION "${OV_PLUGINS_MAP_DEFINITION}
};\n")

configure_file("${IE_PLUGINS_HPP_HEADER_IN}" "${IE_PLUGINS_HPP_HEADER}" @ONLY)
configure_file("${OV_PLUGINS_HPP_HEADER_IN}" "${OV_PLUGINS_HPP_HEADER}" @ONLY)

@@ -49,10 +49,6 @@ function(ie_add_plugin)
    # create and configure target

    if(NOT IE_PLUGIN_PSEUDO_PLUGIN_FOR)
        if(IE_PLUGIN_VERSION_DEFINES_FOR)
            addVersionDefines(${IE_PLUGIN_VERSION_DEFINES_FOR} CI_BUILD_NUMBER)
        endif()

        set(input_files ${IE_PLUGIN_SOURCES})
        foreach(obj_lib IN LISTS IE_PLUGIN_OBJECT_LIBRARIES)
            list(APPEND input_files $<TARGET_OBJECTS:${obj_lib}>)
@@ -67,6 +63,10 @@ function(ie_add_plugin)

        add_library(${IE_PLUGIN_NAME} ${library_type} ${input_files})

        if(IE_PLUGIN_VERSION_DEFINES_FOR)
            ov_add_version_defines(${IE_PLUGIN_VERSION_DEFINES_FOR} ${IE_PLUGIN_NAME})
        endif()

        target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE IMPLEMENT_INFERENCE_ENGINE_PLUGIN)
        if(NOT BUILD_SHARED_LIBS)
            # to distinguish functions creating plugin objects
@@ -113,7 +113,7 @@ function(ie_add_plugin)
        if(IE_PLUGIN_PSEUDO_DEVICE)
            set(plugin_hidden HIDDEN)
        endif()
        ie_cpack_add_component(${install_component}
        ie_cpack_add_component(${install_component}
            DISPLAY_NAME "${IE_PLUGIN_DEVICE_NAME} runtime"
            DESCRIPTION "${IE_PLUGIN_DEVICE_NAME} runtime"
            ${plugin_hidden}
@@ -227,16 +227,18 @@ macro(ie_register_plugins_dynamic)

    # Combine all <device_name>.xml files into plugins.xml

    add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
        COMMAND
            "${CMAKE_COMMAND}"
            -D "CMAKE_SHARED_MODULE_PREFIX=${CMAKE_SHARED_MODULE_PREFIX}"
            -D "IE_CONFIG_OUTPUT_FILE=${config_output_file}"
            -D "IE_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
            -P "${IEDevScripts_DIR}/plugins/register_plugin_cmake.cmake"
        COMMENT
            "Registering plugins to plugins.xml config file"
        VERBATIM)
    if(ENABLE_PLUGINS_XML)
        add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
            COMMAND
                "${CMAKE_COMMAND}"
                -D "CMAKE_SHARED_MODULE_PREFIX=${CMAKE_SHARED_MODULE_PREFIX}"
                -D "IE_CONFIG_OUTPUT_FILE=${config_output_file}"
                -D "IE_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
                -P "${IEDevScripts_DIR}/plugins/register_plugin_cmake.cmake"
            COMMENT
                "Registering plugins to plugins.xml config file"
            VERBATIM)
    endif()
endmacro()

#
@@ -279,13 +281,9 @@ function(ie_target_link_plugins TARGET_NAME)
endfunction()

#
# ie_generate_plugins_hpp()
# ov_generate_plugins_hpp()
#
function(ie_generate_plugins_hpp)
    if(BUILD_SHARED_LIBS)
        return()
    endif()

function(ov_generate_plugins_hpp)
    set(device_mapping)
    set(device_configs)
    set(as_extension)
@@ -296,17 +294,23 @@ function(ie_generate_plugins_hpp)
            message(FATAL_ERROR "Unexpected error, please, contact developer of this script")
        endif()

        # create device mapping: preudo device => actual device
        # create device mapping: pseudo device => actual device
        list(GET name 0 device_name)
        if(${device_name}_PSEUDO_PLUGIN_FOR)
            list(APPEND device_mapping "${device_name}:${${device_name}_PSEUDO_PLUGIN_FOR}")
        if(BUILD_SHARED_LIBS)
            list(GET name 1 library_name)
            ie_plugin_get_file_name(${library_name} library_name)
            list(APPEND device_mapping "${device_name}:${library_name}")
        else()
            list(APPEND device_mapping "${device_name}:${device_name}")
        endif()
        if(${device_name}_PSEUDO_PLUGIN_FOR)
            list(APPEND device_mapping "${device_name}:${${device_name}_PSEUDO_PLUGIN_FOR}")
        else()
            list(APPEND device_mapping "${device_name}:${device_name}")
        endif()

        # register plugin as extension
        if(${device_name}_AS_EXTENSION)
            list(APPEND as_extension -D "${device_name}_AS_EXTENSION=ON")
        # register plugin as extension
        if(${device_name}_AS_EXTENSION)
            list(APPEND as_extension -D "${device_name}_AS_EXTENSION=ON")
        endif()
        endif()

        # add default plugin config options
@@ -317,21 +321,22 @@ function(ie_generate_plugins_hpp)
        endif()
    endforeach()

    # add plugins to libraries including ie_plugins.hpp
    # add plugins to libraries including ov_plugins.hpp
    ie_target_link_plugins(openvino)
    if(TARGET inference_engine_s)
        ie_target_link_plugins(inference_engine_s)
    endif()

    set(ie_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/ie_plugins.hpp")
    set(ov_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/ov_plugins.hpp")
    set(plugins_hpp_in "${IEDevScripts_DIR}/plugins/plugins.hpp.in")

    add_custom_command(OUTPUT "${ie_plugins_hpp}"
    add_custom_command(OUTPUT "${ov_plugins_hpp}"
        COMMAND
            "${CMAKE_COMMAND}"
            -D "IE_DEVICE_MAPPING=${device_mapping}"
            -D "IE_PLUGINS_HPP_HEADER_IN=${plugins_hpp_in}"
            -D "IE_PLUGINS_HPP_HEADER=${ie_plugins_hpp}"
            -D "BUILD_SHARED_LIBS=${BUILD_SHARED_LIBS}"
            -D "OV_DEVICE_MAPPING=${device_mapping}"
            -D "OV_PLUGINS_HPP_HEADER_IN=${plugins_hpp_in}"
            -D "OV_PLUGINS_HPP_HEADER=${ov_plugins_hpp}"
            ${device_configs}
            ${as_extension}
            -P "${IEDevScripts_DIR}/plugins/create_plugins_hpp.cmake"
@@ -339,28 +344,11 @@ function(ie_generate_plugins_hpp)
            "${plugins_hpp_in}"
            "${IEDevScripts_DIR}/plugins/create_plugins_hpp.cmake"
        COMMENT
            "Generate ie_plugins.hpp for static build"
            "Generate ov_plugins.hpp for build"
        VERBATIM)

    # for some reason dependency on source files does not work
    # so, we have to use explicit target and make it dependency for inference_engine
    add_custom_target(_ie_plugins_hpp DEPENDS ${ie_plugins_hpp})
    add_dependencies(inference_engine_obj _ie_plugins_hpp)

    # add dependency for object files
    get_target_property(sources inference_engine_obj SOURCES)
    foreach(source IN LISTS sources)
        if("${source}" MATCHES "\\$\\<TARGET_OBJECTS\\:([A-Za-z0-9_]*)\\>")
            # object library
            set(obj_library ${CMAKE_MATCH_1})
            get_target_property(obj_sources ${obj_library} SOURCES)
            list(APPEND all_sources ${obj_sources})
        else()
            # usual source
            list(APPEND all_sources ${source})
        endif()
    endforeach()

    # add dependency on header file generation for all inference_engine source files
    set_source_files_properties(${all_sources} PROPERTIES OBJECT_DEPENDS ${ie_plugins_hpp})
    add_custom_target(_ov_plugins_hpp DEPENDS ${ov_plugins_hpp})
    add_dependencies(inference_engine_obj _ov_plugins_hpp)
endfunction()

@@ -4,10 +4,14 @@

#pragma once

#include <map>
#include <string>

#ifdef OPENVINO_STATIC_LIBRARY

#include "cpp_interfaces/interface/ie_iplugin_internal.hpp"

namespace {
@IE_PLUGINS_DECLARATIONS@
@OV_PLUGINS_DECLARATIONS@

struct Value {
    InferenceEngine::CreatePluginEngineFunc * m_create_plugin_func;
@@ -15,12 +19,20 @@ struct Value {
    std::map<std::string, std::string> m_default_config;
};

#else

struct Value {
    std::string m_plugin_path;
    std::map<std::string, std::string> m_default_config;
};

#endif

using Key = std::string;
using PluginsStaticRegistry = std::map<Key, Value>;

const std::map<Key, Value> getStaticPluginsRegistry() {
@IE_PLUGINS_MAP_DEFINITION@

inline const std::map<Key, Value> getCompiledPluginsRegistry() {
@OV_PLUGINS_MAP_DEFINITION@
    return plugins_hpp;
}

} // namespace

@@ -97,7 +97,11 @@ function(ov_check_pip_packages)

    if(PYTHONINTERP_FOUND)
        execute_process(
            COMMAND ${PYTHON_EXECUTABLE} -c "import pkg_resources ; pkg_resources.require(open('${ARG_REQUIREMENTS_FILE}', mode='r'))"
            COMMAND ${PYTHON_EXECUTABLE} -c "
from check_python_requirements import check_python_requirements ;
check_python_requirements('${ARG_REQUIREMENTS_FILE}') ;
"
            WORKING_DIRECTORY "${IEDevScripts_DIR}"
            RESULT_VARIABLE EXIT_CODE
            OUTPUT_VARIABLE OUTPUT_TEXT
            ERROR_VARIABLE ERROR_TEXT)

@@ -20,7 +20,7 @@ if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
    set(arch_flag X86_64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
    set(arch_flag X86)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*)")
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
    set(arch_flag AARCH64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
    set(arch_flag ARM)
@@ -31,8 +31,8 @@ endif()
set(HOST_${arch_flag} ON)

macro(_ie_process_msvc_generator_platform arch_flag)
    # if cmake -A <ARM|ARM64> is passed
    if(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64" OR CMAKE_SYSTEM_PROCESSOR STREQUAL "ARM64")
    # if cmake -A <ARM|ARM64|x64|Win32> is passed
    if(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64")
        set(AARCH64 ON)
    elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM")
        set(ARM ON)

@@ -185,6 +185,46 @@ macro (addVersionDefines FILE)
    unset(__version_file)
endmacro()

macro (ov_add_version_defines FILE TARGET)
    set(__version_file ${FILE})
    if(NOT IS_ABSOLUTE ${__version_file})
        set(__version_file "${CMAKE_CURRENT_SOURCE_DIR}/${__version_file}")
    endif()
    if(NOT EXISTS ${__version_file})
        message(FATAL_ERROR "${FILE} does not exists in current source directory")
    endif()
    _remove_source_from_target(${TARGET} ${FILE})
    _remove_source_from_target(${TARGET} ${__version_file})
    if (BUILD_SHARED_LIBS)
        add_library(${TARGET}_version OBJECT ${__version_file})
    else()
        add_library(${TARGET}_version STATIC ${__version_file})
    endif()
    if(SUGGEST_OVERRIDE_SUPPORTED)
        set_source_files_properties(${__version_file}
            PROPERTIES COMPILE_OPTIONS -Wno-suggest-override)
    endif()

    target_compile_definitions(${TARGET}_version PRIVATE
        CI_BUILD_NUMBER=\"${CI_BUILD_NUMBER}\"
        $<TARGET_PROPERTY:${TARGET},INTERFACE_COMPILE_DEFINITIONS>
        $<TARGET_PROPERTY:${TARGET},COMPILE_DEFINITIONS>)
    target_include_directories(${TARGET}_version PRIVATE
        $<TARGET_PROPERTY:${TARGET},INTERFACE_INCLUDE_DIRECTORIES>
        $<TARGET_PROPERTY:${TARGET},INCLUDE_DIRECTORIES>)
    target_link_libraries(${TARGET}_version PRIVATE
        $<TARGET_PROPERTY:${TARGET},LINK_LIBRARIES>)
    target_compile_options(${TARGET}_version PRIVATE
        $<TARGET_PROPERTY:${TARGET},INTERFACE_COMPILE_OPTIONS>
        $<TARGET_PROPERTY:${TARGET},COMPILE_OPTIONS>)
    set_target_properties(${TARGET}_version
        PROPERTIES INTERPROCEDURAL_OPTIMIZATION_RELEASE
            $<TARGET_PROPERTY:${TARGET},INTERPROCEDURAL_OPTIMIZATION_RELEASE>)

    target_sources(${TARGET} PRIVATE $<TARGET_OBJECTS:${TARGET}_version>)
    unset(__version_file)
endmacro()

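A hedged sketch of how the new ov_add_version_defines macro is consumed; the ie_add_plugin hunk earlier in this diff makes exactly this swap from addVersionDefines, while the target and file names here are assumptions:

# my_plugin and src/version.cpp are hypothetical; the version file is split
# into a separate ${TARGET}_version library, presumably so that only it needs
# rebuilding when CI_BUILD_NUMBER changes
add_library(my_plugin SHARED src/plugin.cpp src/version.cpp)
ov_add_version_defines(src/version.cpp my_plugin)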
function(ov_add_library_version library)
    if(NOT DEFINED OpenVINO_SOVERSION)
        message(FATAL_ERROR "Internal error: OpenVINO_SOVERSION is not defined")

@@ -169,9 +169,9 @@ ov_generate_dev_package_config()
# with all imported developer targets
register_extra_modules()

# for static libraries case we need to generate final ie_plugins.hpp
# for static libraries case we need to generate final ov_plugins.hpp
# with all the information about plugins
ie_generate_plugins_hpp()
ov_generate_plugins_hpp()

# used for static build
ov_generate_frontends_hpp()

@@ -14,7 +14,13 @@ ie_option (ENABLE_COMPILE_TOOL "Enables compile_tool" ON)

ie_option (ENABLE_STRICT_DEPENDENCIES "Skip configuring \"convinient\" dependencies for efficient parallel builds" ON)

ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ON "X86_64;NOT APPLE;NOT MINGW;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
if(X86_64)
    set(ENABLE_INTEL_GPU_DEFAULT ON)
else()
    set(ENABLE_INTEL_GPU_DEFAULT OFF)
endif()

ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ${ENABLE_INTEL_GPU_DEFAULT} "X86_64 OR AARCH64;NOT APPLE;NOT MINGW;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)

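For readers unfamiliar with the ie_dependent_option pattern used throughout these option files, a hedged illustration of its semantics, with a made-up option name: the option takes its stated default only while every semicolon-separated condition holds, and is forced to the trailing fallback value otherwise:

# ENABLE_MY_FEATURE is a hypothetical option for illustration only;
# it defaults to ON when (X86_64 OR AARCH64) AND (NOT APPLE), else OFF
ie_dependent_option(ENABLE_MY_FEATURE "Example dependent option" ON
    "X86_64 OR AARCH64;NOT APPLE" OFF)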
if (ANDROID OR (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0))
|
||||
# oneDNN doesn't support old compilers and android builds for now, so we'll
|
||||
@@ -41,8 +47,6 @@ In case SELECTIVE_BUILD is enabled, the SELECTIVE_BUILD_STAT variable should con
|
||||
Usage: -DSELECTIVE_BUILD=ON -DSELECTIVE_BUILD_STAT=/path/*.csv" OFF
|
||||
ALLOWED_VALUES ON OFF COLLECT)
|
||||
|
||||
ie_option(ENABLE_ERROR_HIGHLIGHT "Highlight errors and warnings during compile time" ON)
|
||||
|
||||
ie_option (ENABLE_DOCS "Build docs using Doxygen" OFF)
|
||||
|
||||
find_package(PkgConfig QUIET)
|
||||
@@ -90,6 +94,8 @@ ie_option (ENABLE_HETERO "Enables Hetero Device Plugin" ON)
|
||||
|
||||
ie_option (ENABLE_TEMPLATE "Enable template plugin" ON)
|
||||
|
||||
ie_dependent_option (ENABLE_PLUGINS_XML "Generate plugins.xml configuration file or not" OFF "NOT BUILD_SHARED_LIBS" OFF)
|
||||
|
||||
ie_dependent_option (GAPI_TEST_PERF "if GAPI unit tests should examine performance" OFF "ENABLE_TESTS;ENABLE_GAPI_PREPROCESSING" OFF)
|
||||
|
||||
ie_dependent_option (ENABLE_DATA "fetch models from testdata repo" ON "ENABLE_FUNCTIONAL_TESTS;NOT ANDROID" OFF)
|
||||
@@ -148,13 +154,16 @@ ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
|
||||
ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
|
||||
ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
|
||||
ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
|
||||
ie_dependent_option(ENABLE_SYSTEM_PROTOBUF "Use system protobuf" OFF
|
||||
"ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND;BUILD_SHARED_LIBS" OFF)
|
||||
ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
|
||||
ie_dependent_option(ENABLE_SYSTEM_FLATBUFFERS "Use system flatbuffers" ON
|
||||
"ENABLE_OV_TF_LITE_FRONTEND" OFF)
|
||||
|
||||
ie_dependent_option(ENABLE_OV_CORE_UNIT_TESTS "Enables OpenVINO core unit tests" ON "ENABLE_TESTS" OFF)
|
||||
ie_dependent_option(ENABLE_SNAPPY_COMPRESSION "Enables compression support for TF FE" ON
|
||||
"ENABLE_OV_TF_FRONTEND" ON)
|
||||
ie_dependent_option(ENABLE_SYSTEM_PROTOBUF "Enables use of system protobuf" OFF
|
||||
"ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND;BUILD_SHARED_LIBS" OFF)
|
||||
ie_dependent_option(ENABLE_SYSTEM_FLATBUFFERS "Enables use of system flatbuffers" ON
|
||||
"ENABLE_OV_TF_LITE_FRONTEND" OFF)
|
||||
ie_dependent_option(ENABLE_SYSTEM_SNAPPY "Enables use of system version of snappy" OFF "ENABLE_SNAPPY_COMPRESSION;BUILD_SHARED_LIBS" OFF)
|
||||
|
||||
ie_option(ENABLE_OPENVINO_DEBUG "Enable output for OPENVINO_DEBUG statements" OFF)
|
||||
|
||||
if(NOT BUILD_SHARED_LIBS AND ENABLE_OV_TF_FRONTEND)
|
||||
|
||||
@@ -52,6 +52,8 @@ macro(ov_cpack_settings)
|
||||
NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND
|
||||
# see ticket # 82605
|
||||
NOT item STREQUAL "gna" AND
|
||||
# don't install Intel OpenMP during debian
|
||||
NOT item STREQUAL "omp" AND
|
||||
# even for case of system TBB we have installation rules for wheels packages
|
||||
# so, need to skip this explicitly
|
||||
NOT item MATCHES "^tbb(_dev)?$" AND
|
||||
|
||||
@@ -38,6 +38,8 @@ macro(ov_cpack_settings)
|
||||
NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND
|
||||
# see ticket # 82605
|
||||
NOT item STREQUAL "gna" AND
|
||||
# don't install Intel OpenMP during rpm
|
||||
NOT item STREQUAL "omp" AND
|
||||
# even for case of system TBB we have installation rules for wheels packages
|
||||
# so, need to skip this explicitly
|
||||
NOT item MATCHES "^tbb(_dev)?$" AND
|
||||
|
||||
@@ -16,7 +16,8 @@ set(ie_options "@IE_OPTIONS@")
|
||||
list(APPEND ie_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER
|
||||
CMAKE_CXX_LINKER_LAUNCHER CMAKE_C_LINKER_LAUNCHER
|
||||
CMAKE_BUILD_TYPE CMAKE_SKIP_RPATH CMAKE_INSTALL_PREFIX
|
||||
CMAKE_OSX_ARCHITECTURES CMAKE_OSX_DEPLOYMENT_TARGET)
|
||||
CMAKE_OSX_ARCHITECTURES CMAKE_OSX_DEPLOYMENT_TARGET
|
||||
CMAKE_CONFIGURATION_TYPES CMAKE_DEFAULT_BUILD_TYPE)
|
||||
file(TO_CMAKE_PATH "${CMAKE_CURRENT_LIST_DIR}" cache_path)
|
||||
|
||||
message(STATUS "The following CMake options are exported from Inference Engine Developer package")
|
||||
|
||||
@@ -14,7 +14,8 @@ set(ov_options "@IE_OPTIONS@")
|
||||
list(APPEND ov_options CMAKE_CXX_COMPILER_LAUNCHER CMAKE_C_COMPILER_LAUNCHER
|
||||
CMAKE_CXX_LINKER_LAUNCHER CMAKE_C_LINKER_LAUNCHER
|
||||
CMAKE_BUILD_TYPE CMAKE_SKIP_RPATH CMAKE_INSTALL_PREFIX
|
||||
CMAKE_OSX_ARCHITECTURES CMAKE_OSX_DEPLOYMENT_TARGET)
|
||||
CMAKE_OSX_ARCHITECTURES CMAKE_OSX_DEPLOYMENT_TARGET
|
||||
CMAKE_CONFIGURATION_TYPES CMAKE_DEFAULT_BUILD_TYPE)
|
||||
file(TO_CMAKE_PATH "${CMAKE_CURRENT_LIST_DIR}" cache_path)
|
||||
|
||||
message(STATUS "The following CMake options are exported from OpenVINO Developer package")
|
||||
@@ -27,6 +28,9 @@ foreach(option IN LISTS ov_options)
|
||||
endforeach()
|
||||
message(" ")
|
||||
|
||||
# activate generation of plugins.xml
|
||||
set(ENABLE_PLUGINS_XML ON)
|
||||
|
||||
# for samples in 3rd party projects
|
||||
if(ENABLE_SAMPLES)
|
||||
set_and_check(gflags_DIR "@gflags_BINARY_DIR@")
|
||||
|
||||
@@ -9,12 +9,9 @@
|
||||
Run and Deploy Locally <openvino_deployment_guide>
|
||||
Deploy via Model Serving <ovms_what_is_openvino_model_server>
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
|
||||
Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.
|
||||
|
||||
@sphinxdirective
|
||||
.. panels::
|
||||
|
||||
:doc:`Deploy via OpenVINO Runtime <openvino_deployment_guide>`
|
||||
@@ -30,8 +27,7 @@ Once you have a model that meets both OpenVINO™ and your requirements, you can
|
||||
Deployment via OpenVINO Model Server allows the application to connect to the inference server set up remotely.
|
||||
This way inference can use external resources instead of those available to the application itself.
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
Apart from the default deployment options, you may also :doc:`deploy your application for the TensorFlow framework with OpenVINO Integration <ovtf_integration>`
|
||||
|
||||
|
||||
Apart from the default deployment options, you may also [deploy your application for the TensorFlow framework with OpenVINO Integration](./openvino_ecosystem_ovtf.md).
|
||||
@endsphinxdirective
|
||||
@@ -1,15 +0,0 @@
|
||||
# OpenVINO™ Deep Learning Workbench Overview {#workbench_docs_Workbench_DG_Introduction}
|
||||
|
||||
@sphinxdirective
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
:hidden:
|
||||
|
||||
workbench_docs_Workbench_DG_Install
|
||||
workbench_docs_Workbench_DG_Work_with_Models_and_Sample_Datasets
|
||||
Tutorials <workbench_docs_Workbench_DG_Tutorials>
|
||||
User Guide <workbench_docs_Workbench_DG_User_Guide>
|
||||
workbench_docs_Workbench_DG_Troubleshooting
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
@@ -10,15 +10,15 @@
|
||||
openvino_docs_OV_UG_Running_on_multiple_devices
|
||||
openvino_docs_OV_UG_Hetero_execution
|
||||
openvino_docs_OV_UG_Automatic_Batching
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the [guide on inference devices](../OV_Runtime_UG/supported_plugins/Device_Plugins.md).
|
||||
|
||||
OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the :doc:`guide on inference devices <openvino_docs_OV_UG_Working_with_devices>`.
|
||||
|
||||
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:
|
||||
|
||||
* [Automatic Device Selection (AUTO)](../OV_Runtime_UG/auto_device_selection.md)
|
||||
* [Multi-Device Execution (MULTI)](../OV_Runtime_UG/multi_device.md)
|
||||
* [Heterogeneous Execution (HETERO)](../OV_Runtime_UG/hetero_execution.md)
|
||||
* [Automatic Batching Execution (Auto-batching)](../OV_Runtime_UG/automatic_batching.md)
|
||||
* :doc:`Automatic Device Selection (AUTO) <openvino_docs_OV_UG_supported_plugins_AUTO>`
|
||||
* :doc:``Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
|
||||
* :doc:`Heterogeneous Execution (HETERO) <openvino_docs_OV_UG_Hetero_execution>`
|
||||
* :doc:`Automatic Batching Execution (Auto-batching) <openvino_docs_OV_UG_Automatic_Batching>`
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
@@ -9,22 +9,23 @@
|
||||
openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
|
||||
omz_tools_downloader
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's :doc:`Open Model Zoo <model_zoo>`.
|
||||
|
||||
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).
|
||||
:doc:`OpenVINO™ supports several model formats <Supported_Model_Formats>` and allows to convert them to it's own, OpenVINO IR, providing a tool dedicated to this task.
|
||||
|
||||
[OpenVINO™ supports several model formats](../MO_DG/prepare_model/convert_model/supported_model_formats.md) and allows to convert them to it's own, OpenVINO IR, providing a tool dedicated to this task.
|
||||
|
||||
[Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by [alternating input shapes](../MO_DG/prepare_model/convert_model/Converting_Model.md), [embedding preprocessing](../MO_DG/prepare_model/Additional_Optimizations.md) and [cutting training parts off](../MO_DG/prepare_model/convert_model/Cutting_Model.md).
|
||||
:doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by :doc:`alternating input shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`embedding preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` and :doc:`cutting training parts off <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`.
|
||||
|
||||
The approach to fully convert a model is considered the default choice, as it allows the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
|
||||
|
||||
Conversion is not required for ONNX and PaddlePaddle models, as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
|
||||
Conversion is not required for ONNX, PaddlePaddle, and TensorFlow models (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
|
||||
|
||||
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
|
||||
* [See the supported formats and how to use them in your project](../MO_DG/prepare_model/convert_model/supported_model_formats.md)
|
||||
* [Convert different model formats to the OpenVINO IR format](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
|
||||
* [Automate model-related tasks with Model Downloader and additional OMZ Tools](https://docs.openvino.ai/latest/omz_tools_downloader.html).
|
||||
|
||||
To begin with, you may want to [browse a database of models for use in your projects](../model_zoo.md).
|
||||
* :doc:`See the supported formats and how to use them in your project <Supported_Model_Formats>`.
|
||||
* :doc:`Convert different model formats to the OpenVINO IR format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
|
||||
* `Automate model-related tasks with Model Downloader and additional OMZ Tools <https://docs.openvino.ai/latest/omz_tools_downloader.html>`__.
|
||||
|
||||
To begin with, you may want to :doc:`browse a database of models for use in your projects <model_zoo>`.
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
@@ -6,78 +6,103 @@
|
||||
:maxdepth: 1
|
||||
:hidden:
|
||||
|
||||
ovtf_integration
|
||||
ote_documentation
|
||||
ovtf_integration
|
||||
ovsa_get_started
|
||||
openvino_inference_engine_tools_compile_tool_README
|
||||
openvino_docs_tuning_utilities
|
||||
workbench_docs_Workbench_DG_Introduction
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
|
||||
|
||||
OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.
|
||||
|
||||
### Neural Network Compression Framework (NNCF)
|
||||
Neural Network Compression Framework (NNCF)
|
||||
###########################################
|
||||
|
||||
A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.
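For illustration, a minimal sketch of training-time quantization with the NNCF PyTorch API is shown below; the tiny model and the input size are placeholders, not part of an official tutorial:

```python
import torch
from nncf import NNCFConfig
from nncf.torch import create_compressed_model

# A placeholder model standing in for your real network.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())

nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "quantization"},
})

# Wraps the model with fake-quantization operations; training then proceeds as usual.
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)
```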
|
||||
|
||||
More resources:
|
||||
* [Documentation](@ref tmo_introduction)
|
||||
* [GitHub](https://github.com/openvinotoolkit/nncf)
|
||||
* [PyPI](https://pypi.org/project/nncf/)
|
||||
|
||||
### OpenVINO™ Security Add-on
|
||||
A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
|
||||
|
||||
More resources:
|
||||
* [documentation](https://docs.openvino.ai/latest/ovsa_get_started.html)
|
||||
* [GitHub](https://github.com/openvinotoolkit/security_addon)
|
||||
* :doc:`Documentation <tmo_introduction>`
|
||||
* `GitHub <https://github.com/openvinotoolkit/nncf>`__
|
||||
* `PyPI <https://pypi.org/project/nncf/>`__
|
||||
|
||||
|
||||
### OpenVINO™ integration with TensorFlow (OVTF)
|
||||
A solution empowering TensorFlow developers with OpenVINO's optimization capabilities. With just two lines of code in your application, you can offload inference to OpenVINO, while keeping the TensorFlow API.
|
||||
OpenVINO™ Training Extensions
|
||||
#############################
|
||||
|
||||
More resources:
|
||||
* [documentation](https://github.com/openvinotoolkit/openvino_tensorflow)
|
||||
* [PyPI](https://pypi.org/project/openvino-tensorflow/)
|
||||
* [GitHub](https://github.com/openvinotoolkit/openvino_tensorflow)
|
||||
|
||||
### DL Streamer
|
||||
A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
|
||||
|
||||
More resources:
|
||||
* [documentation on GitHub](https://dlstreamer.github.io/index.html)
|
||||
* [installation Guide on GitHub](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide)
|
||||
|
||||
### DL Workbench
|
||||
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow and to import, analyze, optimize, and build your pre-trained models. You can do all that by visiting [Intel® DevCloud for the Edge](https://software.intel.com/content/www/us/en/develop/tools/devcloud.html) and launching DL Workbench online.
|
||||
|
||||
More resources:
|
||||
* [documentation](dl_workbench_overview.md)
|
||||
* [Docker Hub](https://hub.docker.com/r/openvino/workbench)
|
||||
* [PyPI](https://pypi.org/project/openvino-workbench/)
|
||||
|
||||
### OpenVINO™ Training Extensions (OTE)
|
||||
A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
|
||||
|
||||
More resources:
|
||||
* [GitHub](https://github.com/openvinotoolkit/training_extensions)
|
||||
|
||||
### Computer Vision Annotation Tool (CVAT)
|
||||
* :doc:`Overview <ote_documentation>`
|
||||
* `GitHub <https://github.com/openvinotoolkit/training_extensions>`__
|
||||
* `Documentation <https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html>`__
|
||||
|
||||
OpenVINO™ Security Add-on
|
||||
#########################
|
||||
|
||||
A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
|
||||
|
||||
More resources:
|
||||
|
||||
* `Documentation <https://docs.openvino.ai/latest/ovsa_get_started.html>`__
|
||||
* `GitHub <https://github.com/openvinotoolkit/security_addon>`__
|
||||
|
||||
|
||||
OpenVINO™ integration with TensorFlow (OVTF)
|
||||
############################################
|
||||
|
||||
A solution empowering TensorFlow developers with OpenVINO's optimization capabilities. With just two lines of code in your application, you can offload inference to OpenVINO, while keeping the TensorFlow API.
|
||||
|
||||
More resources:
|
||||
|
||||
* `Documentation <https://github.com/openvinotoolkit/openvino_tensorflow>`__
|
||||
* `PyPI <https://pypi.org/project/openvino-tensorflow/>`__
|
||||
* `GitHub <https://github.com/openvinotoolkit/openvino_tensorflow>`__
|
||||
|
||||
DL Streamer
|
||||
###########
|
||||
|
||||
A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.
|
||||
|
||||
More resources:
|
||||
|
||||
* `Documentation on GitHub <https://dlstreamer.github.io/index.html>`__
|
||||
* `Installation Guide on GitHub <https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide>`__
|
||||
|
||||
DL Workbench
|
||||
############
|
||||
|
||||
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow and to import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.
|
||||
|
||||
More resources:
|
||||
|
||||
* `Documentation <https://docs.openvino.ai/2022.3/workbench_docs_Workbench_DG_Introduction.html>`__
|
||||
* `Docker Hub <https://hub.docker.com/r/openvino/workbench>`__
|
||||
* `PyPI <https://pypi.org/project/openvino-workbench/>`__
|
||||
|
||||
Computer Vision Annotation Tool (CVAT)
|
||||
######################################
|
||||
|
||||
An online, interactive video and image annotation tool for computer vision purposes.
|
||||
|
||||
More resources:
|
||||
* [documentation on GitHub](https://opencv.github.io/cvat/docs/)
|
||||
* [web application](https://cvat.org/)
|
||||
* [Docker Hub](https://hub.docker.com/r/openvino/cvat_server)
|
||||
* [GitHub](https://github.com/openvinotoolkit/cvat)
|
||||
|
||||
### Dataset Management Framework (Datumaro)
|
||||
* `Documentation on GitHub <https://opencv.github.io/cvat/docs/>`__
|
||||
* `Web application <https://www.cvat.ai/>`__
|
||||
* `Docker Hub <https://hub.docker.com/r/openvino/cvat_server>`__
|
||||
* `GitHub <https://github.com/openvinotoolkit/cvat>`__
|
||||
|
||||
Dataset Management Framework (Datumaro)
|
||||
#######################################
|
||||
|
||||
A framework and CLI tool to build, transform, and analyze datasets.
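For illustration, converting a dataset between formats with the Datumaro Python API might look like the following minimal sketch; the paths and format names are placeholders:

```python
import datumaro as dm

# Load a dataset in one format and re-export it in another.
dataset = dm.Dataset.import_from("./voc_dataset", "voc")
dataset.export("./coco_dataset", "coco")
```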
|
||||
|
||||
More resources:
|
||||
* [documentation on GitHub](https://openvinotoolkit.github.io/datumaro/docs/)
|
||||
* [PyPI](https://pypi.org/project/datumaro/)
|
||||
* [GitHub](https://github.com/openvinotoolkit/datumaro)
|
||||
|
||||
* `Documentation on GitHub <https://openvinotoolkit.github.io/datumaro/docs/>`__
|
||||
* `PyPI <https://pypi.org/project/datumaro/>`__
|
||||
* `GitHub <https://github.com/openvinotoolkit/datumaro>`__
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
|
||||
@@ -1,42 +1,55 @@
|
||||
# OpenVINO™ integration with TensorFlow {#ovtf_integration}
|
||||
|
||||
@sphinxdirective
|
||||
|
||||
**OpenVINO™ integration with TensorFlow** is a solution for TensorFlow developers who want to get started with OpenVINO™ in their inference applications. By adding just two lines of code, you can take advantage of OpenVINO™ toolkit optimizations with TensorFlow inference applications across a range of Intel® computation devices.
|
||||
|
||||
This is all you need:
|
||||
```python
|
||||
import openvino_tensorflow
|
||||
openvino_tensorflow.set_backend('<backend_name>')
|
||||
```
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
import openvino_tensorflow
|
||||
openvino_tensorflow.set_backend('<backend_name>')
|
||||
|
||||
|
||||
**OpenVINO™ integration with TensorFlow** accelerates inference across many AI models on a variety of Intel® technologies, such as:
|
||||
- Intel® CPUs
|
||||
- Intel® integrated GPUs
|
||||
|
||||
> **NOTE**: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ solutions.
|
||||
To find out more about the product itself, as well as learn how to use it in your project, check its dedicated [GitHub repository](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docs).
|
||||
* Intel® CPUs
|
||||
* Intel® integrated GPUs
|
||||
|
||||
.. note::
|
||||
For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ solutions.
|
||||
|
||||
To find out more about the product itself, as well as learn how to use it in your project, check its dedicated `GitHub repository <https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docs>`__.
|
||||
|
||||
|
||||
To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the [examples folder](https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples) in our GitHub repository.
|
||||
To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the `examples folder <https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples>`__ in our GitHub repository.
|
||||
|
||||
Sample tutorials are also hosted on [Intel® DevCloud](https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html). The demo applications are implemented using Jupyter Notebooks. You can interactively execute them on Intel® DevCloud nodes and compare the results of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.
|
||||
Sample tutorials are also hosted on `Intel® DevCloud <https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html>`__. The demo applications are implemented using Jupyter Notebooks. You can interactively execute them on Intel® DevCloud nodes and compare the results of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.
|
||||
|
||||
## License
|
||||
**OpenVINO™ integration with TensorFlow** is licensed under [Apache License Version 2.0](https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/LICENSE).
|
||||
License
|
||||
#######
|
||||
|
||||
**OpenVINO™ integration with TensorFlow** is licensed under `Apache License Version 2.0 <https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/LICENSE>`__.
|
||||
By contributing to the project, you agree to the license and copyright terms therein
|
||||
and release your contribution under these terms.
|
||||
|
||||
## Support
|
||||
Support
|
||||
#######
|
||||
|
||||
Submit your questions, feature requests and bug reports via [GitHub issues](https://github.com/openvinotoolkit/openvino_tensorflow/issues).
|
||||
Submit your questions, feature requests and bug reports via `GitHub issues <https://github.com/openvinotoolkit/openvino_tensorflow/issues>`__.
|
||||
|
||||
## How to Contribute
|
||||
How to Contribute
|
||||
#################
|
||||
|
||||
We welcome community contributions to **OpenVINO™ integration with TensorFlow**. If you have an idea for improvement:
|
||||
|
||||
* Share your proposal via [GitHub issues](https://github.com/openvinotoolkit/openvino_tensorflow/issues).
|
||||
* Submit a [pull request](https://github.com/openvinotoolkit/openvino_tensorflow/pulls).
|
||||
* Share your proposal via `GitHub issues <https://github.com/openvinotoolkit/openvino_tensorflow/issues>`__.
|
||||
* Submit a `pull request <https://github.com/openvinotoolkit/openvino_tensorflow/pulls>`__.
|
||||
|
||||
We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for your feature. Upon verification of your pull request, we will merge it into the repository, provided that it meets the above-mentioned requirements and proves acceptable.
|
||||
|
||||
---
|
||||
\* Other names and brands may be claimed as the property of others.
|
||||
\* Other names and brands may be claimed as the property of others.
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
|
||||
docs/Documentation/openvino_training_extensions.md (40 lines, new file)
@@ -0,0 +1,40 @@
|
||||
# OpenVINO™ Training Extensions {#ote_documentation}
|
||||
|
||||
@sphinxdirective
|
||||
|
||||
OpenVINO™ Training Extensions provide a suite of advanced algorithms to train
|
||||
Deep Learning models and convert them using the `OpenVINO™
|
||||
toolkit <https://software.intel.com/en-us/openvino-toolkit>`__ for optimized
|
||||
inference. They allow you to export and convert models to the required format. OpenVINO Training Extensions create and train models independently. The project is open source and available on `GitHub <https://github.com/openvinotoolkit/training_extensions>`__. Read the OpenVINO Training Extensions `documentation <https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html>`__ to learn more.
|
||||
|
||||
Detailed Workflow
|
||||
#################
|
||||
|
||||
.. image:: ./_static/images/training_extensions_framework.png
|
||||
|
||||
1. To start working with OpenVINO Training Extensions, prepare and annotate your dataset, for example, in CVAT.
|
||||
|
||||
2. OpenVINO Training Extensions train the model, using the training interface, and evaluate its quality on your dataset, using the evaluation and inference interfaces.
|
||||
|
||||
.. note::
|
||||
Prepare a separate dataset or split the dataset you have for more accurate quality evaluation.
|
||||
|
||||
3. Once the evaluation results are satisfactory, you can deploy your model or continue optimizing it, using NNCF and POT. For more information about these frameworks, go to :doc:`Optimization Guide <openvino_docs_model_optimization_guide>`.
|
||||
|
||||
If the results are unsatisfactory, add datasets and perform the same steps, starting with dataset annotation.
|
||||
|
||||
OpenVINO Training Extensions Components
|
||||
#######################################
|
||||
|
||||
- `OpenVINO Training Extensions SDK <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_sdk>`__
|
||||
- `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_cli>`__
|
||||
- `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/master/external>`__
|
||||
|
||||
Tutorials
|
||||
#########
|
||||
|
||||
`Object Detection <https://github.com/openvinotoolkit/training_extensions/blob/master/ote_cli/notebooks/train.ipynb>`__
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
|
||||
@@ -1002,7 +1002,6 @@ EXCLUDE_SYMBOLS = InferenceEngine::details \
|
||||
ie_api::BlobBuffer \
|
||||
*impl* \
|
||||
*device_name* \
|
||||
*num_requests* \
|
||||
*exec_net* \
|
||||
*c_config* \
|
||||
*ie_core_impl* \
|
||||
|
||||
@@ -1,237 +1,353 @@
|
||||
# How to Implement Custom GPU Operations {#openvino_docs_Extensibility_UG_GPU}
|
||||
|
||||
@sphinxdirective
|
||||
|
||||
To enable operations not supported by OpenVINO™ out of the box, you may need an extension for the OpenVINO operation set and a custom kernel for the device you will target. This article describes custom kernel support for the GPU device.
|
||||
|
||||
The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
|
||||
|
||||
There are two options for using the custom operation configuration file:
|
||||
|
||||
* Include a section with your kernels in the automatically loaded `<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file.
|
||||
* Call the `ov::Core::set_property()` method from your application with the `"CONFIG_FILE"` key and the configuration file name as the value before loading the network that uses custom operations into the plugin:
|
||||
@sphinxtabset
|
||||
* Include a section with your kernels in the automatically loaded ``<lib_path>/cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml`` file.
|
||||
* Call the ``:ref:`ov::Core::set_property() <doxid-classov_1_1_core_1aa953cb0a1601dbc9a34ef6ba82b8476e>``` method from your application with the ``"CONFIG_FILE"`` key and the configuration file name as the value before loading the network that uses custom operations into the plugin:
|
||||
|
||||
@sphinxtab{C++}
|
||||
@snippet docs/snippets/gpu/custom_kernels_api.cpp part0
|
||||
@endsphinxtab
|
||||
.. tab-set::
|
||||
|
||||
@sphinxtab{Python}
|
||||
@snippet docs/snippets/gpu/custom_kernels_api.py part0
|
||||
@endsphinxtab
|
||||
.. tab-item:: C++
|
||||
|
||||
.. doxygensnippet:: docs/snippets/gpu/custom_kernels_api.cpp
|
||||
:language: cpp
|
||||
:fragment: [part0]
|
||||
|
||||
@endsphinxtabset
|
||||
.. tab-item:: Python
|
||||
|
||||
.. doxygensnippet:: docs/snippets/gpu/custom_kernels_api.py
|
||||
:language: python
|
||||
:fragment: [part0]
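For illustration, the referenced Python snippet boils down to a call like the following minimal sketch; the configuration file name and the model path are placeholders:

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   # Register the custom kernel configuration before compiling the model for GPU.
   core.set_property("GPU", {"CONFIG_FILE": "custom_layer_example.xml"})
   model = core.read_model("model.xml")
   compiled_model = core.compile_model(model, "GPU")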
|
||||
|
||||
All OpenVINO samples, except the trivial `hello_classification`, and most Open Model Zoo demos
|
||||
feature a dedicated command-line option `-c` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
|
||||
```sh
|
||||
$ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU
|
||||
-c <absolute_path_to_config>/custom_layer_example.xml
|
||||
```
|
||||
|
||||
## Configuration File Format <a name="config-file-format"></a>
|
||||
All OpenVINO samples, except the trivial ``hello_classification``, and most Open Model Zoo demos
|
||||
feature a dedicated command-line option ``-c`` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
|
||||
|
||||
The configuration file is expected to follow the `.xml` file structure
|
||||
with a node of the type `CustomLayer` for every custom operation you provide.
|
||||
.. code-block:: sh
|
||||
|
||||
$ ./classification_sample -m <path_to_model>/bvlc_alexnet_fp16.xml -i ./validation_set/daily/227x227/apron.bmp -d GPU
|
||||
-c <absolute_path_to_config>/custom_layer_example.xml
|
||||
|
||||
.. _config-file-format:
|
||||
|
||||
Configuration File Format
|
||||
#########################
|
||||
|
||||
The configuration file is expected to follow the ``.xml`` file structure
|
||||
with a node of the type ``CustomLayer`` for every custom operation you provide.
|
||||
|
||||
The definitions described in the sections below use the following notations:
|
||||
|
||||
Notation | Description
|
||||
---|---
|
||||
(0/1) | Can have zero or one instance of this node or attribute
|
||||
(1) | Must have only one instance of this node or attribute
|
||||
(0+) | Can have any number of instances of this node or attribute
|
||||
(1+) | Can have one or more instances of this node or attribute
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
|
||||
### CustomLayer Node and Sub-Node Structure
|
||||
* - Notation
|
||||
- Description
|
||||
* - (0/1)
|
||||
- Can have zero or one instance of this node or attribute
|
||||
* - (1)
|
||||
- Must have only one instance of this node or attribute
|
||||
* - (0+)
|
||||
- Can have any number of instances of this node or attribute
|
||||
* - (1+)
|
||||
- Can have one or more instances of this node or attribute
|
||||
|
||||
The `CustomLayer` node contains the entire configuration for a single custom operation.
|
||||
CustomLayer Node and Sub-Node Structure
|
||||
+++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
| Attribute Name |\# | Description |
|
||||
|-----|-----|-----|
|
||||
| `name` | (1) | The name of the operation type to be used. This name should be identical to the type used in the OpenVINO IR.|
|
||||
| `type` | (1) | Must be `SimpleGPU`. |
|
||||
| `version` | (1) | Must be `1`. |
|
||||
The ``CustomLayer`` node contains the entire configuration for a single custom operation.
|
||||
|
||||
**Sub-nodes**: `Kernel` (1), `Buffers` (1), `CompilerOptions` (0+),
|
||||
`WorkSizes` (0/1)
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
|
||||
### Kernel Node and Sub-Node Structure
|
||||
* - Attribute Name
|
||||
- #
|
||||
- Description
|
||||
* - ``name``
|
||||
- (1)
|
||||
- The name of the operation type to be used. This name should be identical to the type used in the IR.
|
||||
* - ``type``
|
||||
- (1)
|
||||
- Must be ``SimpleGPU``.
|
||||
* - ``version``
|
||||
- (1)
|
||||
- Must be ``1``.
|
||||
|
||||
The `Kernel` node contains all kernel source code configuration.
|
||||
**Sub-nodes**: ``Kernel`` (1), ``Buffers`` (1), ``CompilerOptions`` (0+),
|
||||
``WorkSizes`` (0/1)
|
||||
|
||||
**Sub-nodes**: `Source` (1+), `Define` (0+)
|
||||
Kernel Node and Sub-Node Structure
|
||||
++++++++++++++++++++++++++++++++++
|
||||
|
||||
### Source Node and Sub-Node Structure
|
||||
The ``Kernel`` node contains all kernel source code configuration.
|
||||
|
||||
The `Source` node points to a single OpenCL source file.
|
||||
**Sub-nodes**: ``Source`` (1+), ``Define`` (0+)
|
||||
|
||||
| Attribute Name | \# | Description |
|
||||
|-----|-----|-----|
|
||||
| `filename` | (1) | Name of the file containing OpenCL source code. The path is relative to your executable. Multiple source nodes will have their sources concatenated in order. |
|
||||
Source Node and Sub-Node Structure
|
||||
++++++++++++++++++++++++++++++++++
|
||||
|
||||
The ``Source`` node points to a single OpenCL source file.
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
|
||||
* - Attribute Name
|
||||
- #
|
||||
- Description
|
||||
* - ``filename``
|
||||
- (1)
|
||||
- Name of the file containing OpenCL source code. The path is relative to your executable. Multiple source nodes will have their sources concatenated in order.
|
||||
|
||||
**Sub-nodes**: None
|
||||
|
||||
### Define Node and Sub-Node Structure
|
||||
Define Node and Sub-Node Structure
|
||||
++++++++++++++++++++++++++++++++++
|
||||
|
||||
The `Define` node configures a single `#define` instruction to be added to
|
||||
The ``Define`` node configures a single ``#define`` instruction to be added to
|
||||
the sources during compilation (JIT).
|
||||
|
||||
| Attribute Name | \# | Description |
|
||||
|------|-------|------|
|
||||
| `name` | (1) | The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string. |
|
||||
| `param` | (0/1) | This parameter value is used as the value of this JIT definition. |
|
||||
| `type` | (0/1) | The parameter type. Accepted values: `int`, `float`, and `int[]`, `float[]` for arrays. |
|
||||
| `default` | (0/1) | The default value to be used if the specified parameters are missing from the operation in the OpenVINO IR. |
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
|
||||
* - Attribute Name
|
||||
- #
|
||||
- Description
|
||||
* - ``name``
|
||||
- (1)
|
||||
- The name of the defined JIT. For static constants, this can include the value as well, which is taken as a string.
|
||||
* - ``param``
|
||||
- (0/1)
|
||||
- This parameter value is used as the value of this JIT definition.
|
||||
* - ``type``
|
||||
- (0/1)
|
||||
- The parameter type. Accepted values: ``int``, ``float``, and ``int[]``, ``float[]`` for arrays.
|
||||
* - ``default``
|
||||
- (0/1)
|
||||
- The default value to be used if the specified parameters are missing from the operation in the OpenVINO IR.
|
||||
|
||||
**Sub-nodes:** None
|
||||
|
||||
The resulting JIT has the following form:
|
||||
`#define [name] [type] [value/default]`.
|
||||
``#define [name] [type] [value/default]``.
|
||||
|
||||
### Buffers Node and Sub-Node Structure
|
||||
Buffers Node and Sub-Node Structure
|
||||
+++++++++++++++++++++++++++++++++++
|
||||
|
||||
The `Buffers` node configures all input/output buffers for the OpenCL entry
|
||||
The ``Buffers`` node configures all input/output buffers for the OpenCL entry
|
||||
function. The node itself has no attributes.
|
||||
|
||||
**Sub-nodes:** `Data` (0+), `Tensor` (1+)
|
||||
**Sub-nodes:** ``Data`` (0+), ``Tensor`` (1+)
|
||||
|
||||
### Data Node and Sub-Node Structure
|
||||
Data Node and Sub-Node Structure
|
||||
++++++++++++++++++++++++++++++++
|
||||
|
||||
The `Data` node configures a single input with static data, for example,
|
||||
The ``Data`` node configures a single input with static data, for example,
|
||||
weights or biases.
|
||||
|
||||
| Attribute Name | \# | Description |
|
||||
|----|-----|------|
|
||||
| `name` | (1) | Name of a blob attached to an operation in the OpenVINO IR. |
|
||||
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
|
||||
* - Attribute Name
|
||||
- #
|
||||
- Description
|
||||
* - ``name``
|
||||
- (1)
|
||||
- Name of a blob attached to an operation in the OpenVINO IR.
|
||||
* - ``arg-index``
|
||||
- (1)
|
||||
- 0-based index in the entry function arguments to be bound to.
|
||||
|
||||
|
||||
**Sub-nodes**: None
|
||||
|
||||
### Tensor Node and Sub-Node Structure
|
||||
Tensor Node and Sub-Node Structure
|
||||
++++++++++++++++++++++++++++++++++
|
||||
|
||||
The `Tensor` node configures a single input or output tensor.
|
||||
The ``Tensor`` node configures a single input or output tensor.
|
||||
|
||||
| Attribute Name | \# | Description |
|
||||
|------|-------|-------|
|
||||
| `arg-index` | (1) | 0-based index in the entry function arguments to be bound to. |
|
||||
| `type` | (1) | `input` or `output` |
|
||||
| `port-index` | (1) | 0-based index in the operation input/output ports in the OpenVINO IR |
|
||||
| `format` | (0/1) | Data layout declaration for the tensor. Accepted values: `BFYX`, `BYXF`, `YXFB`, `FYXB` (also in lowercase). The default value: `BFYX` |
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
|
||||
### CompilerOptions Node and Sub-Node Structure
|
||||
* - Attribute Name
|
||||
- #
|
||||
- Description
|
||||
* - ``arg-index``
|
||||
- (1)
|
||||
- 0-based index in the entry function arguments to be bound to.
|
||||
* - ``type``
|
||||
- (1)
|
||||
- ``input`` or ``output``
|
||||
* - ``port-index``
|
||||
- (1)
|
||||
- 0-based index in the operation input/output ports in the OpenVINO IR
|
||||
* - ``format``
|
||||
- (0/1)
|
||||
- Data layout declaration for the tensor. Accepted values: ``BFYX``, ``BYXF``, ``YXFB``, ``FYXB``, and the same values in all lowercase. Default value: ``BFYX``.
|
||||
|
||||
The `CompilerOptions` node configures the compilation flags for the OpenCL
|
||||
CompilerOptions Node and Sub-Node Structure
|
||||
+++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
The ``CompilerOptions`` node configures the compilation flags for the OpenCL
|
||||
sources.
|
||||
|
||||
| Attribute Name | \# | Description |
|
||||
|--------|-----|------|
|
||||
| `options` | (1) | Options string to be passed to the OpenCL compiler |
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
|
||||
* - Attribute Name
|
||||
- #
|
||||
- Description
|
||||
* - ``options``
|
||||
- (1)
|
||||
- Options string to be passed to the OpenCL compiler
|
||||
|
||||
**Sub-nodes**: None
|
||||
|
||||
### WorkSizes Node and Sub-Node Structure
|
||||
WorkSizes Node and Sub-Node Structure
|
||||
+++++++++++++++++++++++++++++++++++++
|
||||
|
||||
The `WorkSizes` node configures the global/local work sizes to be used when
|
||||
The ``WorkSizes`` node configures the global/local work sizes to be used when
|
||||
queuing an OpenCL program for execution.
|
||||
|
||||
| Attribute Name | \# | Description |
|
||||
|-----|------|-----|
|
||||
| `global`<br>`local` | (0/1)<br>(0/1) | An array of up to three integers or formulas for defining OpenCL work-sizes to be used during execution.<br> The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,%. All operators are evaluated in integer arithmetic. <br>Default value: `global="B*F*Y*X" local=""` |
|
||||
| `dim` | (0/1) | A tensor to take the work-size from. Accepted values: `input N`, `output`, where `N` is a 0-based index of an input tensor. The default value: `output` |
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
|
||||
* - Attribute Name
|
||||
- #
|
||||
- Description
|
||||
* - ``global`` ``local``
|
||||
- (0/1) (0/1)
|
||||
- An array of up to three integers or formulas for defining OpenCL work-sizes to be used during execution. The formulas can use the values of the B,F,Y,X dimensions and contain the operators: +,-,/,\*,%. All operators are evaluated in integer arithmetic. Default value: ``global="B*F*Y*X" local=""``
|
||||
* - ``dim``
|
||||
- (0/1)
|
||||
- A tensor to take the work-size from. Accepted values: ``input N``, ``output``, where ``N`` is a 0-based index of an input tensor. Default value: ``output``
|
||||
|
||||
**Sub-nodes**: None
|
||||
|
||||
## Example Configuration File
|
||||
Example Configuration File
|
||||
##########################
|
||||
|
||||
The following code sample provides an example configuration file in XML
|
||||
format. For information on the configuration file structure, see the
|
||||
[Configuration File Format](#config-file-format).
|
||||
```xml
|
||||
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
|
||||
<Kernel entry="example_relu_kernel">
|
||||
<Source filename="custom_layer_kernel.cl"/>
|
||||
<Define name="neg_slope" type="float" param="negative_slope" default="0.0"/>
|
||||
</Kernel>
|
||||
<Buffers>
|
||||
<Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
|
||||
<Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
|
||||
</Buffers>
|
||||
<CompilerOptions options="-cl-mad-enable"/>
|
||||
<WorkSizes global="X,Y,B*F"/>
|
||||
</CustomLayer>
|
||||
```
|
||||
format. For information on the configuration file structure, see the `Configuration File Format <#config-file-format>`__.
|
||||
|
||||
## Built-In Definitions for Custom Layers
|
||||
.. code-block:: xml
|
||||
|
||||
<CustomLayer name="ReLU" type="SimpleGPU" version="1">
|
||||
<Kernel entry="example_relu_kernel">
|
||||
<Source filename="custom_layer_kernel.cl"/>
|
||||
<Define name="neg_slope" type="float" param="negative_slope" default="0.0"/>
|
||||
</Kernel>
|
||||
<Buffers>
|
||||
<Tensor arg-index="0" type="input" port-index="0" format="BFYX"/>
|
||||
<Tensor arg-index="1" type="output" port-index="0" format="BFYX"/>
|
||||
</Buffers>
|
||||
<CompilerOptions options="-cl-mad-enable"/>
|
||||
<WorkSizes global="X,Y,B*F"/>
|
||||
</CustomLayer>
|
||||
|
||||
|
||||
Built-In Definitions for Custom Layers
|
||||
######################################
|
||||
|
||||
The following table includes definitions that are attached before
|
||||
user sources.
|
||||
|
||||
For an example, see [Example Kernel](#example-kernel).
|
||||
For an example, see `Example Kernel <#example-kernel>`__.
|
||||
|
||||
| Name | Value |
|
||||
|---|---|
|
||||
| `NUM_INPUTS` | Number of the input tensors bound to this kernel. |
|
||||
| `GLOBAL_WORKSIZE` | An array of global work sizes used to execute this kernel. |
|
||||
| `GLOBAL_WORKSIZE_SIZE` | The size of the `GLOBAL_WORKSIZE` array. |
|
||||
| `LOCAL_WORKSIZE` | An array of local work sizes used to execute this kernel. |
|
||||
| `LOCAL_WORKSIZE_SIZE` | The size of the `LOCAL_WORKSIZE` array. |
|
||||
| `<TENSOR>_DIMS`| An array of the tensor dimension sizes. Always ordered as `BFYX`. |
|
||||
| `<TENSOR>_DIMS_SIZE`| The size of the `<TENSOR>_DIMS` array.|
|
||||
| `<TENSOR>_TYPE`| The datatype of the tensor: `float`, `half`, or `char`. |
|
||||
| `<TENSOR>_FORMAT_<TENSOR_FORMAT>` | The format of the tensor, BFYX, BYXF, YXFB, FYXB, or ANY. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with `#ifdef/#endif`. |
|
||||
| `<TENSOR>_LOWER_PADDING` | An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.|
|
||||
| `<TENSOR>_LOWER_PADDING_SIZE` | The size of the `<TENSOR>_LOWER_PADDING` array. |
|
||||
| `<TENSOR>_UPPER_PADDING` | An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX. |
|
||||
| `<TENSOR>_UPPER_PADDING_SIZE` | The size of the `<TENSOR>_UPPER_PADDING` array. |
|
||||
| `<TENSOR>_PITCHES` | The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX. |
|
||||
| `<TENSOR>_PITCHES_SIZE`| The size of the `<TENSOR>_PITCHES` array. |
|
||||
| `<TENSOR>_OFFSET`| The number of elements from the start of the tensor to the first valid element, bypassing the lower padding. |
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
|
||||
All `<TENSOR>` values are automatically defined for every tensor
|
||||
bound to this operation, such as `INPUT0`, `INPUT1`, and `OUTPUT0`, as shown
|
||||
* - Name
|
||||
- Value
|
||||
* - ``NUM_INPUTS``
|
||||
- Number of the input tensors bound to this kernel
|
||||
* - ``GLOBAL_WORKSIZE``
|
||||
- An array of global work sizes used to execute this kernel
|
||||
* - ``GLOBAL_WORKSIZE_SIZE``
|
||||
- The size of the ``GLOBAL_WORKSIZE`` array
|
||||
* - ``LOCAL_WORKSIZE``
|
||||
- An array of local work sizes used to execute this kernel
|
||||
* - ``LOCAL_WORKSIZE_SIZE``
|
||||
- The size of the ``LOCAL_WORKSIZE`` array
|
||||
* - ``<TENSOR>_DIMS``
|
||||
- An array of the tensor dimension sizes. Always ordered as ``BFYX``
|
||||
* - ``<TENSOR>_DIMS_SIZE``
|
||||
- The size of the ``<TENSOR>_DIMS`` array.
|
||||
* - ``<TENSOR>_TYPE``
|
||||
- The datatype of the tensor: ``float``, ``half``, or ``char``
|
||||
* - ``<TENSOR>_FORMAT_<TENSOR_FORMAT>``
|
||||
- The format of the tensor, BFYX, BYXF, YXFB, FYXB, or ANY. The format is concatenated to the defined name. You can use the tensor format to define codepaths in your code with ``#ifdef/#endif``.
|
||||
* - ``<TENSOR>_LOWER_PADDING``
|
||||
- An array of padding elements used for the tensor dimensions before they start. Always ordered as BFYX.
|
||||
* - ``<TENSOR>_LOWER_PADDING_SIZE``
|
||||
- The size of the ``<TENSOR>_LOWER_PADDING`` array
|
||||
* - ``<TENSOR>_UPPER_PADDING``
|
||||
- An array of padding elements used for the tensor dimensions after they end. Always ordered as BFYX.
|
||||
* - ``<TENSOR>_UPPER_PADDING_SIZE``
|
||||
- The size of the ``<TENSOR>_UPPER_PADDING`` array
|
||||
* - ``<TENSOR>_PITCHES``
|
||||
- The offset (in elements) between adjacent elements in each dimension. Always ordered as BFYX.
|
||||
* - ``<TENSOR>_PITCHES_SIZE``
|
||||
- The size of the ``<TENSOR>_PITCHES`` array
|
||||
* - ``<TENSOR>_OFFSET``
|
||||
- The number of elements from the start of the tensor to the first valid element, bypassing the lower padding.
|
||||
|
||||
All ``<TENSOR>`` values are automatically defined for every tensor
|
||||
bound to this operation, such as ``INPUT0``, ``INPUT1``, and ``OUTPUT0``, as shown
|
||||
in the following example:
|
||||
|
||||
```c
|
||||
#define INPUT0_DIMS_SIZE 4
|
||||
#define INPUT0_DIMS (int []){ 1,96,55,55, }
|
||||
```
|
||||
.. code-block:: c
|
||||
|
||||
## Example Kernel<a name="example-kernel"></a>
|
||||
#define INPUT0_DIMS_SIZE 4
|
||||
#define INPUT0_DIMS (int []){ 1,96,55,55, }
|
||||
|
||||
```c
|
||||
#pragma OPENCL EXTENSION cl_khr_fp16 : enable
|
||||
__kernel void example_relu_kernel(
|
||||
const __global INPUT0_TYPE* input0,
|
||||
__global OUTPUT0_TYPE* output)
|
||||
{
|
||||
const uint idx = get_global_id(0);
|
||||
const uint idy = get_global_id(1);
|
||||
const uint idbf = get_global_id(2); // batches*features, as OpenCL supports 3D nd-ranges only
|
||||
const uint feature = idbf % OUTPUT0_DIMS[1];
|
||||
const uint batch = idbf / OUTPUT0_DIMS[1];
|
||||
//notice that pitches are in elements, not in bytes!
|
||||
const uint in_id = batch*INPUT0_PITCHES[0] + feature*INPUT0_PITCHES[1] + idy*INPUT0_PITCHES[2] + idx*INPUT0_PITCHES[3] + INPUT0_OFFSET;
|
||||
const uint out_id = batch*OUTPUT0_PITCHES[0] + feature*OUTPUT0_PITCHES[1] + idy*OUTPUT0_PITCHES[2] + idx*OUTPUT0_PITCHES[3] + OUTPUT0_OFFSET;
|
||||
.. _example-kernel:
|
||||
|
||||
INPUT0_TYPE value = input0[in_id];
|
||||
// neg_slope (which is non-zero for leaky ReLU) is put automatically as #define, refer to the config xml
|
||||
output[out_id] = value < 0 ? value * neg_slope : value;
|
||||
}
|
||||
```
|
||||
Example Kernel
|
||||
##############
|
||||
|
||||
.. code-block:: c
|
||||
|
||||
> **NOTE**: As described in the previous section, all items such as the
|
||||
> `INPUT0_TYPE` are actually defined as OpenCL (pre-)compiler inputs by
|
||||
> OpenVINO for efficiency reasons. See the [Debugging
|
||||
> Tips](#debugging-tips) below for information on debugging the results.
|
||||
#pragma OPENCL EXTENSION cl_khr_fp16 : enable
|
||||
__kernel void example_relu_kernel(
|
||||
const __global INPUT0_TYPE* input0,
|
||||
__global OUTPUT0_TYPE* output)
|
||||
{
|
||||
const uint idx = get_global_id(0);
|
||||
const uint idy = get_global_id(1);
|
||||
const uint idbf = get_global_id(2); // batches*features, as OpenCL supports 3D nd-ranges only
|
||||
const uint feature = idbf % OUTPUT0_DIMS[1];
|
||||
const uint batch = idbf / OUTPUT0_DIMS[1];
|
||||
//notice that pitches are in elements, not in bytes!
|
||||
const uint in_id = batch*INPUT0_PITCHES[0] + feature*INPUT0_PITCHES[1] + idy*INPUT0_PITCHES[2] + idx*INPUT0_PITCHES[3] + INPUT0_OFFSET;
|
||||
const uint out_id = batch*OUTPUT0_PITCHES[0] + feature*OUTPUT0_PITCHES[1] + idy*OUTPUT0_PITCHES[2] + idx*OUTPUT0_PITCHES[3] + OUTPUT0_OFFSET;
|
||||
|
||||
## Debugging Tips<a name="debugging-tips"></a>
|
||||
INPUT0_TYPE value = input0[in_id];
|
||||
// neg_slope (which is non-zero for leaky ReLU) is put automatically as #define, refer to the config xml
|
||||
output[out_id] = value < 0 ? value * neg_slope : value;
|
||||
}
|
||||
|
||||
**Using `printf` in the OpenCL™ Kernels**.
|
||||
To debug the specific values, use `printf` in your kernels.
|
||||
.. _debugging-tips:
|
||||
|
||||
.. note::
|
||||
As described in the previous section, all items such as the ``INPUT0_TYPE`` are actually defined as OpenCL (pre-)compiler inputs by OpenVINO for efficiency reasons. See the `Debugging Tips <#debugging-tips>`__ below for information on debugging the results.
|
||||
|
||||
Debugging Tips
|
||||
##############
|
||||
|
||||
**Using printf in the OpenCL™ Kernels**.
|
||||
To debug the specific values, use ``printf`` in your kernels.
|
||||
However, be careful not to output excessively, which
|
||||
could generate too much data. The `printf` output buffer is limited in size, so
|
||||
could generate too much data. The ``printf`` output buffer is limited in size, so
|
||||
your output can be truncated to fit the buffer. Also, because of
|
||||
buffering, you actually get an entire buffer of output when the
|
||||
execution ends.<br>
|
||||
execution ends.
|
||||
|
||||
For more information, refer to the [printf Function](https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html).
|
||||
For more information, refer to the `printf Function <https://www.khronos.org/registry/OpenCL/sdk/1.2/docs/man/xhtml/printfFunction.html>`__.
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
@@ -18,12 +18,9 @@
|
||||
openvino_docs_transformations
|
||||
OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
|
||||
|
||||
@endsphinxdirective
|
||||
|
||||
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including
|
||||
TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The list of supported operations is different for
|
||||
each of the supported frameworks. To see the operations supported by your framework, refer to
|
||||
[Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
|
||||
each of the supported frameworks. To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>`.
|
||||
|
||||
Custom operations, meaning operations not included in the list, are not recognized by OpenVINO out of the box. The need for a custom operation may arise in two cases:
|
||||
|
||||
@@ -35,31 +32,33 @@ Importing models with such operations requires additional steps. This guide illu
|
||||
|
||||
Defining a new custom operation consists of two parts:
|
||||
|
||||
1. Definition of operation semantics in OpenVINO, the code that describes how this operation should be inferred, consuming input tensor(s) and producing output tensor(s). The implementation of execution kernels for [GPU](./GPU_Extensibility.md) is described in separate guides.
|
||||
1. Definition of operation semantics in OpenVINO, the code that describes how this operation should be inferred, consuming input tensor(s) and producing output tensor(s). The implementation of execution kernels for :doc:`GPU <openvino_docs_Extensibility_UG_GPU>` is described in separate guides.
|
||||
|
||||
2. A mapping rule that facilitates the conversion of the framework operation representation to the OpenVINO-defined operation semantics.
|
||||
|
||||
The first part is required for inference. The second part is required for successful import of a model containing such operations from the original framework model format. There are several options to implement each part. The following sections will describe them in detail.
|
||||
|
||||
## Definition of Operation Semantics
|
||||
Definition of Operation Semantics
|
||||
#################################
|
||||
|
||||
If the custom operation can be mathematically represented as a combination of existing OpenVINO operations and such a decomposition gives the desired performance, then a low-level operation implementation is not required. Refer to the latest OpenVINO operation set when deciding on the feasibility of such a decomposition. You can use any valid combination of existing operations. The next section of this document describes the way to map a custom operation.
|
||||
|
||||
If such decomposition is not possible or appears too bulky with a large number of constituent operations that do not perform well, then a new class for the custom operation should be implemented, as described in the [Custom Operation Guide](add_openvino_ops.md).
|
||||
If such decomposition is not possible or appears too bulky with a large number of constituent operations that do not perform well, then a new class for the custom operation should be implemented, as described in the :doc:`Custom Operation Guide <openvino_docs_Extensibility_UG_add_openvino_ops>`.
|
||||
|
||||
You might prefer implementing a custom operation class if you already have a generic C++ implementation of the operation kernel. Otherwise, try to decompose the operation first, as described above. Then, after verifying the correctness of inference and the resulting performance, you may move on to an optional Bare Metal C++ implementation.
|
||||
|
||||
## Mapping from Framework Operation
|
||||
Mapping from Framework Operation
|
||||
################################
|
||||
|
||||
Mapping of a custom operation is implemented differently, depending on the model format used for import. You may choose one of the following:
|
||||
|
||||
1. If a model is represented in the ONNX (including models exported from PyTorch to ONNX) or PaddlePaddle formats, then one of the classes from [Frontend Extension API](frontend_extensions.md) should be used. It consists of several classes available in C++ which can be used with the `--extensions` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the `read_model` method. Python API is also available for runtime model import.
|
||||
1. If a model is represented in the ONNX (including models exported from PyTorch to ONNX), PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
|
||||
|
||||
2. If a model is represented in the TensorFlow, Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.
|
||||
2. If a model is represented in the Caffe, Kaldi or MXNet formats, then :doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.
|
||||
|
||||
The existence of two simultaneous approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi and Apache MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the `read_model` method, which can use new frontends only. Follow the appropriate guide referenced above to implement mappings, depending on the framework frontend.
|
||||
The existence of two simultaneous approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle and TensorFlow) and legacy frontends (Caffe, Kaldi and Apache MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the ``read_model`` method, which can use new frontends only. Follow the appropriate guide referenced above to implement mappings, depending on the framework frontend.
|
||||
|
||||
If you are implementing extensions for the new ONNX or PaddlePaddle frontends and plan to use the `--extensions` option in Model Optimizer for model conversion, then the extensions should be:
|
||||
If you are implementing extensions for the new ONNX, PaddlePaddle or TensorFlow frontends and plan to use the ``--extensions`` option in Model Optimizer for model conversion, then the extensions should be:
|
||||
|
||||
1. Implemented in C++ only.
|
||||
|
||||
@@ -69,109 +68,123 @@ Model Optimizer does not support new frontend extensions written in Python API.
|
||||
|
||||
The remaining part of this guide describes the application of the Frontend Extension API to the new frontends.
|
||||
|
||||
## Registering Extensions
|
||||
Registering Extensions
|
||||
######################
|
||||
|
||||
A custom operation class and a new mapping frontend extension class object should be registered to be usable in OpenVINO Runtime.
|
||||
|
||||
> **NOTE**: This documentation is derived from the [Template extension](https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new), which demonstrates the details of extension development. It is based on a minimalistic `Identity` operation that serves as a placeholder for your real custom operation. Review the complete, fully compilable code to see how it works.
|
||||
.. note::
|
||||
This documentation is derived from the `Template extension <https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new>`__, which demonstrates the details of extension development. It is based on a minimalistic ``Identity`` operation that serves as a placeholder for your real custom operation. Review the complete, fully compilable code to see how it works.
|
||||
|
||||
Use the `ov::Core::add_extension` method to load the extensions to the `ov::Core` object. This method allows loading a library with extensions, or extensions defined in the code.
|
||||
Use the ``:ref:`ov::Core::add_extension <doxid-classov_1_1_core_1a68d0dea1cbcd42a67bea32780e32acea>``` method to load the extensions to the ``:ref:`ov::Core <doxid-classov_1_1_core>``` object. This method allows loading a library with extensions, or extensions defined in the code.
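For illustration, loading a compiled extension library in Python might look like the following minimal sketch; the library and model file names are placeholders:

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   # The library name below is a placeholder for your compiled extension library.
   core.add_extension("libcustom_extensions.so")
   model = core.read_model("model_with_custom_ops.onnx")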
|
||||
|
||||
### Load Extensions to Core
|
||||
Load Extensions to Core
|
||||
+++++++++++++++++++++++
|
||||
|
||||
Extensions can be loaded from code with the `ov::Core::add_extension` method:
|
||||
Extensions can be loaded from code with the ``:ref:`ov::Core::add_extension <doxid-classov_1_1_core_1a68d0dea1cbcd42a67bea32780e32acea>``` method:
|
||||
|
||||
@sphinxtabset
|
||||
.. tab-set::
|
||||
|
||||
@sphinxtab{C++}
|
||||
|
||||
@snippet docs/snippets/ov_extensions.cpp add_extension
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{Python}
|
||||
|
||||
@snippet docs/snippets/ov_extensions.py add_extension
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
|
||||
`Identity` is a custom operation class defined in [Custom Operation Guide](add_openvino_ops.md). This is sufficient to enable reading OpenVINO IR which uses the `Identity` extension operation emitted by Model Optimizer. In order to load the original model directly into the runtime, add a mapping extension:
|
||||
|
||||
@sphinxdirective
|
||||
|
||||
.. tab:: C++
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
|
||||
:language: cpp
|
||||
:fragment: add_frontend_extension
|
||||
|
||||
.. tab:: Python
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_extensions.py
|
||||
:language: python
|
||||
:fragment: add_frontend_extension
|
||||
|
||||
@endsphinxdirective
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
|
||||
:language: cpp
|
||||
:fragment: [add_extension]
|
||||
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_extensions.py
|
||||
:language: python
|
||||
:fragment: [add_extension]
|
||||
|
||||
|
||||
``Identity`` is a custom operation class defined in :doc:`Custom Operation Guide <openvino_docs_Extensibility_UG_add_openvino_ops>`. This is sufficient to enable reading OpenVINO IR which uses the ``Identity`` extension operation emitted by Model Optimizer. In order to load the original model directly into the runtime, add a mapping extension:
|
||||
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
|
||||
:language: cpp
|
||||
:fragment: [add_frontend_extension]
|
||||
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_extensions.py
|
||||
:language: python
|
||||
:fragment: [add_frontend_extension]
|
||||
|
||||
When the Python API is used, there is no way to implement a custom OpenVINO operation. Even if a custom OpenVINO operation is implemented in C++ and loaded into the runtime through a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. In this case, use the C++ shared library approach to implement both the operation semantics and the framework mapping.
|
||||
|
||||
Python can still be used to map and decompose operations when only operations from the standard OpenVINO operation set are used.
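For illustration, a hypothetical sketch of such a Python-side mapping is shown below; it assumes the ``ConversionExtension`` class from the Frontend Extension API, and ``CustomAbs`` is an invented framework operation name used only for this example:

.. code-block:: python

   from openvino.frontend import ConversionExtension
   from openvino.runtime import Core
   from openvino.runtime import opset8 as ops

   def convert_custom_abs(node):
       # Decompose the framework operation into a standard opset operation.
       return ops.abs(node.get_input(0)).outputs()

   core = Core()
   core.add_extension(ConversionExtension("CustomAbs", convert_custom_abs))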
|
||||
|
||||
### Create a Library with Extensions
|
||||
|
||||
Create a Library with Extensions
|
||||
++++++++++++++++++++++++++++++++
|
||||
|
||||
An extension library should be created in the following cases:
|
||||
|
||||
- Conversion of a model with custom operations in Model Optimizer.
|
||||
- Loading a model with custom operations in a Python application. This applies to both framework models and OpenVINO IR.
|
||||
- Loading models with custom operations in tools that support loading extensions from a library, for example the `benchmark_app`.
|
||||
* Conversion of a model with custom operations in Model Optimizer.
|
||||
* Loading a model with custom operations in a Python application. This applies to both framework models and OpenVINO IR.
|
||||
* Loading models with custom operations in tools that support loading extensions from a library, for example the ``benchmark_app``.
|
||||
|
||||
To create an extension library, for example, to load the extensions into Model Optimizer, perform the following:
|
||||
|
||||
1. Create an entry point for the extension library. OpenVINO provides the `OPENVINO_CREATE_EXTENSIONS()` macro, which allows you to define an entry point to a library with OpenVINO Extensions.
|
||||
1. Create an entry point for the extension library. OpenVINO provides the ``:ref:`OPENVINO_CREATE_EXTENSIONS() <doxid-core_2include_2openvino_2core_2extension_8hpp_1acdadcfa0eff763d8b4dadb8a9cb6f6e6>``` macro, which allows you to define an entry point to a library with OpenVINO Extensions.
|
||||
This macro should have a vector of all OpenVINO Extensions as an argument.
|
||||
|
||||
Based on that, the declaration of an extension class might look like the following:
|
||||
|
||||
@snippet template_extension/new/ov_extension.cpp ov_extension:entry_point
|
||||
.. doxygensnippet:: ./src/core/template_extension/new/ov_extension.cpp
|
||||
:language: cpp
|
||||
:fragment: [ov_extension:entry_point]
|
||||
|
||||
2. Configure the build of your extension library, using the following CMake script:
|
||||
|
||||
@snippet template_extension/new/CMakeLists.txt cmake:extension
|
||||
.. doxygensnippet:: ./src/core/template_extension/new/CMakeLists.txt
|
||||
:language: cpp
|
||||
:fragment: [cmake:extension]
|
||||
|
||||
This CMake script finds OpenVINO, using the `find_package` CMake command.
|
||||
This CMake script finds OpenVINO, using the ``find_package`` CMake command.
|
||||
|
||||
3. Build the extension library, running the commands below:
|
||||
|
||||
```sh
|
||||
$ cd src/core/template_extension/new
|
||||
$ mkdir build
|
||||
$ cd build
|
||||
$ cmake -DOpenVINO_DIR=<OpenVINO_DIR> ../
|
||||
$ cmake --build .
|
||||
```
|
||||
.. code-block:: sh
|
||||
|
||||
$ cd src/core/template_extension/new
|
||||
$ mkdir build
|
||||
$ cd build
|
||||
$ cmake -DOpenVINO_DIR=<OpenVINO_DIR> ../
|
||||
$ cmake --build .
|
||||
|
||||
|
||||
4. After the build, you may use the path to your extension library to load your extensions into OpenVINO Runtime:
|
||||
|
||||
@sphinxtabset
|
||||
.. tab-set::
|
||||
|
||||
.. tab-item:: C++
|
||||
:sync: cpp
|
||||
|
||||
@sphinxtab{C++}
|
||||
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
|
||||
:language: cpp
|
||||
:fragment: [add_extension_lib]
|
||||
|
||||
@snippet docs/snippets/ov_extensions.cpp add_extension_lib
|
||||
.. tab-item:: Python
|
||||
:sync: py
|
||||
|
||||
.. doxygensnippet:: docs/snippets/ov_extensions.py
|
||||
:language: python
|
||||
:fragment: [add_extension_lib]
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@sphinxtab{Python}
|
||||
See Also
|
||||
########
|
||||
|
||||
@snippet docs/snippets/ov_extensions.py add_extension_lib
|
||||
* :doc:`OpenVINO Transformations <openvino_docs_transformations>`
|
||||
* :doc:`Using OpenVINO Runtime Samples <openvino_docs_OV_UG_Samples_Overview>`
|
||||
* :doc:`Hello Shape Infer SSD sample <openvino_inference_engine_samples_hello_reshape_ssd_README>`
|
||||
|
||||
@endsphinxtab
|
||||
|
||||
@endsphinxtabset
|
||||
|
||||
## See Also
|
||||
|
||||
* [OpenVINO Transformations](./ov_transformations.md)
|
||||
* [Using OpenVINO Runtime Samples](../OV_Runtime_UG/Samples_Overview.md)
|
||||
* [Hello Shape Infer SSD sample](../../samples/cpp/hello_reshape_ssd/README.md)
|
||||
@endsphinxdirective
|
||||
|
||||
@@ -1,59 +1,81 @@
|
||||
# Custom OpenVINO™ Operations {#openvino_docs_Extensibility_UG_add_openvino_ops}
@sphinxdirective

OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box. This capability requires writing code in C++, so if you are using Python to develop your application, you need to build a separate shared library implemented in C++ first and load it in Python using the ``add_extension`` API. Please refer to :ref:`Create library with extensions <create_library_with_extensions>` for more details on library creation and usage. The remaining part of this document describes how to implement an operation class.

Operation Class
###############

To add your custom operation, create a new class that extends ``ov::Op``, which is in turn derived from ``:ref:`ov::Node <doxid-classov_1_1_node>```, the base class for all graph operations in OpenVINO™. To add ``ov::Op``, include the following file:

.. doxygensnippet:: ./src/core/template_extension/new/identity.hpp
   :language: cpp
   :fragment: [op:common_include]
Follow the steps below to add a custom operation:

1. Add the ``OPENVINO_OP`` macro, which defines a ``NodeTypeInfo`` object that identifies the type of the operation to the graph users and helps with dynamic type resolution. The type info of an operation currently consists of a string operation identifier and a string for the operation version.

2. Implement a default constructor and constructors that optionally take the operation inputs and attributes as parameters.

3. Override the shape inference method ``validate_and_infer_types``. This method is called multiple times during graph manipulations to determine the shapes and element types of the operation outputs. To access the input shapes and input element types, use the ``get_input_partial_shape()`` and ``get_input_element_type()`` methods of ``:ref:`ov::Node <doxid-classov_1_1_node>```. Set the inferred shape and element type of the output using ``set_output_type``.

4. Override the ``clone_with_new_inputs`` method, which enables graph manipulation routines to create copies of this operation and connect it to different nodes during optimization.

5. Override the ``visit_attributes`` method, which enables serialization and deserialization of operation attributes. An ``AttributeVisitor`` is passed to the method, and the implementation is expected to walk over all the attributes in the op using the type-aware ``on_attribute`` helper. Helpers are already implemented for standard C++ types like ``int64_t``, ``float``, ``bool``, ``vector``, and for existing OpenVINO defined types.

6. Override ``evaluate``, which is an optional method that enables fallback of some devices to this implementation and the application of constant folding if there is a custom operation on the constant branch. If your operation contains an ``evaluate`` method, you also need to override the ``has_evaluate`` method, which lets the runtime query whether ``evaluate`` is available for the operation.

Based on that, a declaration of an operation class can look as follows:
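The sketch below is modeled on the template extension's ``Identity`` operation; the namespace and exact signatures are assumptions rather than the verbatim template code.

.. code-block:: cpp

   #include <openvino/op/op.hpp>

   namespace TemplateExtension {

   class Identity : public ov::op::Op {
   public:
       OPENVINO_OP("Identity");

       Identity() = default;
       Identity(const ov::Output<ov::Node>& arg);

       void validate_and_infer_types() override;
       std::shared_ptr<ov::Node> clone_with_new_inputs(const ov::OutputVector& new_args) const override;
       bool visit_attributes(ov::AttributeVisitor& visitor) override;
       bool evaluate(ov::TensorVector& outputs, const ov::TensorVector& inputs) const override;
       bool has_evaluate() const override;
   };

   }  // namespace TemplateExtension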
Operation Constructors
++++++++++++++++++++++

An OpenVINO™ operation contains two constructors:

* Default constructor, which enables you to create an operation without attributes
* Constructor that creates and validates an operation with specified inputs and attributes

.. doxygensnippet:: ./src/core/template_extension/new/identity.cpp
   :language: cpp
   :fragment: [op:ctor]
``validate_and_infer_types()``
++++++++++++++++++++++++++++++

The ``:ref:`ov::Node::validate_and_infer_types <doxid-classov_1_1_node_1ac5224b5be848ec670d2078d9816d12e7>``` method validates operation attributes and calculates output shapes using attributes of the operation.

.. doxygensnippet:: ./src/core/template_extension/new/identity.cpp
   :language: cpp
   :fragment: [op:validate]
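For a shape-preserving operation such as the assumed ``Identity``, the whole method can be a one-liner; the sketch below simply forwards the input shape and element type to the output:

.. code-block:: cpp

   void TemplateExtension::Identity::validate_and_infer_types() {
       // Output 0 repeats the shape and element type of input 0
       set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
   }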
``clone_with_new_inputs()``
+++++++++++++++++++++++++++

The ``:ref:`ov::Node::clone_with_new_inputs <doxid-classov_1_1_node_1a04cb103fa069c3b7944ab7c44d94f5ff>``` method creates a copy of the operation with new inputs.

.. doxygensnippet:: ./src/core/template_extension/new/identity.cpp
   :language: cpp
   :fragment: [op:copy]
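A typical implementation, sketched here for the assumed ``Identity`` class, checks the number of new inputs and constructs a fresh instance from them:

.. code-block:: cpp

   std::shared_ptr<ov::Node> TemplateExtension::Identity::clone_with_new_inputs(
           const ov::OutputVector& new_args) const {
       OPENVINO_ASSERT(new_args.size() == 1, "Identity expects exactly one input");
       return std::make_shared<Identity>(new_args.at(0));
   }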
``visit_attributes()``
++++++++++++++++++++++

The ``:ref:`ov::Node::visit_attributes <doxid-classov_1_1_node_1a9743b56d352970486d17dae2416d958e>``` method enables you to visit all operation attributes.

.. doxygensnippet:: ./src/core/template_extension/new/identity.cpp
   :language: cpp
   :fragment: [op:visit_attributes]
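Since the assumed ``Identity`` has no attributes, its visitor is trivial; the commented line sketches what an operation with an attribute would do:

.. code-block:: cpp

   bool TemplateExtension::Identity::visit_attributes(ov::AttributeVisitor& visitor) {
       // An operation with attributes would register them here, e.g.:
       // visitor.on_attribute("axis", m_axis);
       return true;
   }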
``evaluate()`` and ``has_evaluate()``
++++++++++++++++++++++++++++++++++++++

The ``:ref:`ov::Node::evaluate <doxid-classov_1_1_node_1acfb82acc8349d7138aeaa05217c7014e>``` method enables you to apply constant folding to an operation.

.. doxygensnippet:: ./src/core/template_extension/new/identity.cpp
   :language: cpp
   :fragment: [op:evaluate]
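For the assumed ``Identity``, ``evaluate`` can be sketched as a plain memory copy, with ``has_evaluate`` reporting that this fallback is available:

.. code-block:: cpp

   #include <cstring>

   bool TemplateExtension::Identity::evaluate(ov::TensorVector& outputs,
                                              const ov::TensorVector& inputs) const {
       const auto& in = inputs[0];
       auto& out = outputs[0];
       out.set_shape(in.get_shape());
       std::memcpy(out.data(), in.data(), in.get_byte_size());
       return true;
   }

   bool TemplateExtension::Identity::has_evaluate() const {
       return true;  // tells the runtime that evaluate() can be used
   }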
@endsphinxdirective

@@ -1,14 +1,18 @@
# Frontend Extensions {#openvino_docs_Extensibility_UG_Frontend_Extensions}
@sphinxdirective

The goal of this chapter is to explain how to use Frontend extension classes to facilitate the mapping of custom operations from the framework model representation to the OpenVINO representation. Refer to :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>` to understand the entire flow.

This API is applicable to new frontends only, which exist for ONNX, PaddlePaddle and TensorFlow. If a different model format is used, follow the legacy :doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` guide.

.. note::
   This documentation is written based on the `Template extension <https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new>`__, which demonstrates extension development details based on a minimalistic ``Identity`` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compilable, to see how it works.
Single Operation Mapping with OpExtension
#########################################

This section covers the case when a single operation in the framework representation is mapped to a single operation in the OpenVINO representation. This is called *one-to-one mapping*. There is an ``OpExtension`` class that works well if all the following conditions are satisfied:
1. Number of inputs to operation in the Framework representation is the same as in the OpenVINO representation.

@@ -20,63 +24,87 @@ This section covers the case when a single operation in framework representation

5. Each attribute in the OpenVINO operation can be initialized from one of the attributes of the original operation or by some predefined constant value. The values of copied attributes cannot contain expressions; a value is accepted as-is, so its type should be compatible.
.. note::
   The ``OpExtension`` class is currently available for the ONNX and TensorFlow frontends. The PaddlePaddle frontend has named inputs and outputs for operations (not indexed), therefore OpExtension mapping is not applicable in this case.
The next example maps the ONNX operation with type `Identity <https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity>`__ to the OpenVINO template extension ``Identity`` class.

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_Identity_header]

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_Identity]
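In essence, the mapping is a single registration call; the sketch below assumes the ``TemplateExtension::Identity`` class and its header from the template extension:

.. code-block:: cpp

   #include <openvino/openvino.hpp>
   #include <openvino/frontend/extension.hpp>

   #include "identity.hpp"  // hypothetical header declaring TemplateExtension::Identity

   int main() {
       ov::Core core;
       // One-to-one mapping: the framework "Identity" node maps to TemplateExtension::Identity
       core.add_extension(ov::frontend::OpExtension<TemplateExtension::Identity>("Identity"));
       return 0;
   }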
The mapping doesn’t involve any attributes, as the operation Identity doesn’t have them.

Extension objects, like the just-constructed ``extension``, can be added to the OpenVINO runtime just before loading a model that contains custom operations:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_read_model]
Or extensions can be constructed in a separately compiled shared library. A separately compiled library can be used in Model Optimizer or ``benchmark_app``. Read about how to build and load such a library in the chapter “Create library with extensions” in :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>`.

If an operation has multiple inputs and/or outputs, they will be mapped in order. The type of elements in input/output tensors should match the expected types in the surrounding operations. For example, if a custom operation produces the ``f32`` data type, then the operation that consumes this output should also support ``f32``. Otherwise, model conversion fails with an error; no automatic type conversion happens.
Converting to Standard OpenVINO Operation
+++++++++++++++++++++++++++++++++++++++++

The ``OpExtension`` class can be used when mapping to one of the operations from the standard OpenVINO operation set is what you need and there is no class like ``TemplateExtension::Identity`` implemented.

Here is an example for a custom framework operation “MyRelu”. Suppose it is mathematically equivalent to the standard ``Relu`` that exists in the OpenVINO operation set, but for some reason has the type name “MyRelu”. In this case, you can directly say that the “MyRelu” -> ``Relu`` mapping should be used:
.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_extensions.cpp
         :language: cpp
         :fragment: [frontend_extension_MyRelu]

   .. tab-item:: Python
      :sync: python

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [py_frontend_extension_MyRelu]
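For the C++ case, such a registration can be sketched as a one-liner, with the standard operation identified by its type string:

.. code-block:: cpp

   #include <openvino/openvino.hpp>
   #include <openvino/frontend/extension.hpp>

   int main() {
       ov::Core core;
       // "MyRelu" in the framework model converts to the standard OpenVINO Relu
       core.add_extension(ov::frontend::OpExtension<>("Relu", "MyRelu"));
       return 0;
   }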
In the resulting converted OpenVINO model, the “MyRelu” operation will be replaced by the standard operation ``Relu`` from the latest available OpenVINO operation set. Notice that when a standard operation is used, it can be specified using just a type string (“Relu”) instead of using the ``ov::opset8::Relu`` class name as a template parameter for ``OpExtension``. This method is available for operations from the standard operation set only. For a user's custom OpenVINO operation, the corresponding class should always be specified as a template parameter, as was demonstrated with ``TemplateExtension::Identity``.
Attributes Mapping
++++++++++++++++++

As described above, ``OpExtension`` is useful when attributes can be mapped one by one or initialized by a constant. If the set of attributes in the framework representation and the OpenVINO representation completely match by their names and types, nothing should be specified in the OpExtension constructor parameters. The attributes are discovered and mapped automatically based on the ``visit_attributes`` method that should be defined for any OpenVINO operation.
Imagine you have a ``CustomOperation`` class implementation that has two attributes with the names ``attr1`` and ``attr2``:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_CustomOperation]
And the original model in the framework representation also has an operation with the name “CustomOperation” with the same ``attr1`` and ``attr2`` attributes. Then, with the following code:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_CustomOperation_as_is]
both ``attr1`` and ``attr2`` are copied from the framework representation to the OpenVINO representation automatically. If for some reason the names of the attributes are different but their values can still be copied “as-is”, you can pass an attribute names mapping in the ``OpExtension`` constructor:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_CustomOperation_rename]
Where ``fw_attr1`` and ``fw_attr2`` are the names of the corresponding attributes in the framework operation representation.

If copying an attribute is not what you need, ``OpExtension`` can also set an attribute to a predefined constant value. For the same ``CustomOperation``, imagine you want to set ``attr2`` to the value 5 instead of copying it from ``fw_attr2``. To achieve that, do the following:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_CustomOperation_rename_set]
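The three cases can be sketched as follows, assuming a ``CustomOperation`` class whose attributes are visited as ``attr1`` and ``attr2``, and a ``core`` object of type ``ov::Core``:

.. code-block:: cpp

   // Names and types match: nothing extra to specify
   core.add_extension(ov::frontend::OpExtension<CustomOperation>("CustomOperation"));

   // Names differ: map OpenVINO attribute names to framework attribute names
   core.add_extension(ov::frontend::OpExtension<CustomOperation>(
       "CustomOperation",
       std::map<std::string, std::string>{{"attr1", "fw_attr1"}, {"attr2", "fw_attr2"}},
       {}));

   // Copy attr1, but force attr2 to the constant 5
   core.add_extension(ov::frontend::OpExtension<CustomOperation>(
       "CustomOperation",
       std::map<std::string, std::string>{{"attr1", "fw_attr1"}},
       std::map<std::string, ov::Any>{{"attr2", 5}}));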
So the conclusion is that each attribute of the target OpenVINO operation should be initialized either by

@@ -88,46 +116,89 @@ So the conclusion is that each attribute of target OpenVINO operation should be

This is achieved by specifying maps as arguments for the ``OpExtension`` constructor.
Mapping custom operations to frontends with OPENVINO_FRAMEWORK_MAP macro
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

.. note::
   The solution below works only for the ONNX and TensorFlow frontends.

``OPENVINO_FRAMEWORK_MAP`` is a macro that should be used inside an OpenVINO operation's class definition and that lets you specify the mapping between this operation and a frontend operation.

Let's consider the following example. Imagine you have an ONNX model with a ``CustomOp`` operation (and this operation has a ``mode`` attribute) and a TensorFlow model with a ``CustomOpV3`` operation (this operation has an ``axis`` attribute), and both of them can be implemented with a single OpenVINO operation ``CustomOp``, as follows:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_framework_map_macro_headers]

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_framework_map_macro_CustomOp]

Let's take a closer look at the parameters this macro takes:

.. code-block:: cpp

   OPENVINO_FRAMEWORK_MAP(framework, name, attributes_map, attributes_values)

* ``framework`` - the framework name.
* ``name`` - the framework operation name. It is optional if the OpenVINO custom operation name (that is, the name passed as the first parameter to the ``OPENVINO_OP`` macro) is the same as the framework operation name and both ``attributes_map`` and ``attributes_values`` are not provided.
* ``attributes_map`` - used to provide a mapping between an OpenVINO operation attribute and a framework operation attribute. It contains key-value pairs, where the key is an OpenVINO operation attribute name and the value is its corresponding framework operation attribute name. This parameter is optional if the number of OpenVINO operation attributes and their names match one-to-one with the framework operation attributes.
* ``attributes_values`` - used to provide default values for OpenVINO operation attributes that are not specified in ``attributes_map``. It contains key-value pairs, where the key is an OpenVINO operation attribute name and the value is this attribute's value. This parameter cannot be provided if ``attributes_map`` contains all of the OpenVINO operation attributes or if ``attributes_map`` is not provided.

In the example above, ``OPENVINO_FRAMEWORK_MAP`` is used twice.
First, OpenVINO ``CustomOp`` is mapped to the ONNX ``CustomOp`` operation: the ``m_mode`` attribute is mapped to the ``mode`` attribute, while the ``m_axis`` attribute gets the default value ``-1``.
Second, OpenVINO ``CustomOp`` is mapped to the TensorFlow ``CustomOpV3`` operation: the ``m_axis`` attribute is mapped to the ``axis`` attribute, while the ``m_mode`` attribute gets the default value ``"linear"``.

The last step is to register this custom operation as follows:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_framework_map_macro_add_extension]
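Sketched under the assumption that the OpenVINO ``CustomOp`` visits its attributes as ``mode`` and ``axis``, the class definition might carry the two mappings like this (argument syntax is illustrative, not verbatim from the library):

.. code-block:: cpp

   class CustomOp : public ov::op::Op {
   public:
       OPENVINO_OP("CustomOp");
       // ONNX CustomOp: copy "mode", default axis to -1
       OPENVINO_FRAMEWORK_MAP(onnx, "CustomOp", {{"mode", "mode"}}, {{"axis", -1}});
       // TensorFlow CustomOpV3: copy "axis", default mode to "linear"
       OPENVINO_FRAMEWORK_MAP(tensorflow, "CustomOpV3", {{"axis", "axis"}}, {{"mode", "linear"}});
       // ... constructors and overrides as for any custom operation ...
   };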
Mapping to Multiple Operations with ConversionExtension
#######################################################

Previous sections cover the case when a single operation is mapped to a single operation with optional adjustments in names and attribute values. That is likely enough for your own custom operation with an existing C++ kernel implementation. In this case, your framework representation and OpenVINO representation for the operation are under your control, and inputs/outputs/attributes can be aligned to make ``OpExtension`` usable.

In case one-to-one mapping is not possible, *decomposition to multiple operations* should be considered. It is achieved by using the more verbose and less automated ``ConversionExtension`` class. It enables writing arbitrary code to replace a single framework operation with multiple connected OpenVINO operations, constructing a dependency graph of any complexity.

``ConversionExtension`` maps a single operation to a function which builds a graph using OpenVINO operation classes. Follow the chapter :ref:`Build a Model in OpenVINO Runtime <ov_ug_build_model>` to learn how to use OpenVINO operation classes to build a fragment of a model for replacement.

The next example illustrates using ``ConversionExtension`` for conversion of “ThresholdedRelu” from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))``.
.. note::
   ``ThresholdedRelu`` is one of the standard ONNX operators, which is supported by the ONNX frontend natively out-of-the-box. Here we re-implement it to illustrate how you can add similar support for your custom operation instead of ``ThresholdedRelu``.

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_extensions.cpp
         :language: cpp
         :fragment: [frontend_extension_ThresholdedReLU_header]

   .. tab-item:: Python
      :sync: python

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [py_frontend_extension_ThresholdedReLU_header]

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_extensions.cpp
         :language: cpp
         :fragment: [frontend_extension_ThresholdedReLU]

   .. tab-item:: Python
      :sync: python

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [py_frontend_extension_ThresholdedReLU]
To access the original framework operation attribute value and connect to inputs, the ``node`` object of type ``NodeContext`` is used. It has two main methods:

* ``NodeContext::get_input`` to get an input with a given index,
* ``NodeContext::get_attribute`` to get an attribute value with a given name.

The conversion function should return a vector of node outputs that are mapped to the corresponding outputs of the original framework operation in the same order.
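Putting the pieces together, a C++ conversion function for the formula above can be sketched as below; the opset version is an assumption:

.. code-block:: cpp

   #include <openvino/openvino.hpp>
   #include <openvino/frontend/extension.hpp>
   #include <openvino/opsets/opset8.hpp>

   int main() {
       ov::Core core;
       // Decompose ThresholdedRelu(x, alpha) into Multiply(x, Convert(Greater(x, alpha), f32))
       core.add_extension(ov::frontend::ConversionExtension(
           "ThresholdedRelu",
           [](const ov::frontend::NodeContext& node) {
               const auto x = node.get_input(0);
               const auto alpha = ov::opset8::Constant::create(
                   ov::element::f32, ov::Shape{}, {node.get_attribute<float>("alpha")});
               const auto greater = std::make_shared<ov::opset8::Greater>(x, alpha);
               const auto mask = std::make_shared<ov::opset8::Convert>(greater, ov::element::f32);
               const auto mul = std::make_shared<ov::opset8::Multiply>(x, mask);
               return ov::OutputVector{mul};
           }));
       return 0;
   }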
@endsphinxdirective

@@ -1,28 +1,37 @@
# OpenVINO Graph Rewrite Pass {#openvino_docs_Extensibility_UG_graph_rewrite_pass}
@sphinxdirective

``:ref:`ov::pass::GraphRewrite <doxid-classov_1_1pass_1_1_graph_rewrite>``` serves for running multiple matcher passes on an ``:ref:`ov::Model <doxid-classov_1_1_model>``` in a single graph traversal.
Example:

.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
   :language: cpp
   :fragment: [matcher_pass:graph_rewrite]
In addition, GraphRewrite handles nodes that were registered by MatcherPasses during their execution. These nodes will be added to the beginning of the sequence with nodes for pattern matching.

.. note::
   When using ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>```, a temporary GraphRewrite is used to execute a single MatcherPass.
GraphRewrite has two algorithms for executing MatcherPasses. The first algorithm is straightforward: it applies each MatcherPass in registration order to the current node.

.. image:: ./_static/images/graph_rewrite_execution.png

But it is not really efficient when you have a lot of registered passes. So, first of all, GraphRewrite checks that all MatcherPass patterns have a type-based root node (meaning that the type of this node is not hidden in a predicate).
It then creates a map from the registered MatcherPasses. That helps to avoid the additional cost of applying each MatcherPass to each node.

.. image:: ./_static/images/graph_rewrite_efficient_search.png
.. note::
   The GraphRewrite execution algorithm cannot be set manually and depends only on the root nodes registered inside the MatcherPasses.

See Also
########

* :doc:`OpenVINO™ Transformations <openvino_docs_transformations>`

@endsphinxdirective
@@ -1,13 +1,22 @@

# OpenVINO Matcher Pass {#openvino_docs_Extensibility_UG_matcher_pass}

@sphinxdirective

``:ref:`ov::pass::MatcherPass <doxid-classov_1_1pass_1_1_matcher_pass>``` is used for pattern-based transformations.

Template for the MatcherPass transformation class:

.. doxygensnippet:: docs/snippets/template_pattern_transformation.hpp
   :language: cpp
   :fragment: [graph_rewrite:template_transformation_hpp]

.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
   :language: cpp
   :fragment: [graph_rewrite:template_transformation_cpp]
To use ``:ref:`ov::pass::MatcherPass <doxid-classov_1_1pass_1_1_matcher_pass>```, you need to complete these steps:

1. Create a pattern
2. Implement a callback
3. Register the pattern and Matcher

@@ -15,87 +24,135 @@ To use `ov::pass::MatcherPass`, you need to complete these steps:
So let's go through each of these steps.
Create a pattern
################

A pattern is a single root ``:ref:`ov::Model <doxid-classov_1_1_model>```. The only difference is that you do not need to create a model object; you just need to create and connect opset or special pattern operations.
Then you need to take the last created operation and put it as the root of the pattern. This root node will be used as the root node in pattern matching.

.. note::
   Any nodes in a pattern that have no consumers and are not registered as root will not be used in pattern matching.

.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [pattern:simple_example]
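As a minimal sketch, a pattern for a ``ShapeOf`` fed by a ``Parameter`` could be built like this (names and opset version are illustrative):

.. code-block:: cpp

   #include <openvino/opsets/opset8.hpp>
   #include <openvino/pass/pattern/matcher.hpp>

   void create_pattern() {
       // Type and shape are needed only to construct the Parameter; they are not matched
       auto input = std::make_shared<ov::opset8::Parameter>(ov::element::i64, ov::Shape{1});
       auto shape_of = std::make_shared<ov::opset8::ShapeOf>(input);
       // shape_of becomes the pattern root
       auto matcher = std::make_shared<ov::pass::pattern::Matcher>(shape_of, "MyPattern");
   }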
The ``Parameter`` operation in the example above has a type and shape specified. These attributes are needed only to create the Parameter operation class and will not be used in pattern matching.

For more pattern examples, refer to the `pattern matching section <#pattern-matching>`__.
Implement callback
##################

A callback is an action applied to every pattern entrance. In general, a callback is a lambda function that takes a Matcher object with the detected subgraph.

.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [pattern:callback_example]
The example above shows the callback structure and how the Matcher can be used for accessing nodes detected by the pattern.
The callback return value is ``true`` if the root node was replaced and another pattern cannot be applied to the same root node; otherwise, it is ``false``.

.. note::
   It is not recommended to manipulate nodes that are under the root node. This may affect GraphRewrite execution, as it is expected that all nodes that come after the root node in topological order are valid and can be used in pattern matching.
MatcherPass also provides functionality that allows reporting of newly created nodes that can be used in additional pattern matching.
If a MatcherPass was registered in ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>``` or ``:ref:`ov::pass::GraphRewrite <doxid-classov_1_1pass_1_1_graph_rewrite>```, these registered nodes will be added for additional pattern matching.
That means that matcher passes registered in ``:ref:`ov::pass::GraphRewrite <doxid-classov_1_1pass_1_1_graph_rewrite>``` will be applied to these nodes.

The example below shows how a single MatcherPass can fuse a sequence of operations using the ``register_new_node`` method.

.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
   :language: cpp
   :fragment: [matcher_pass:relu_fusion]
.. note::
   If you register multiple nodes, add them in topological order. We do not topologically sort these nodes, as it is a time-consuming operation.
Register pattern and Matcher
############################

The last step is to register the Matcher and the callback inside the MatcherPass. To do this, call the ``register_matcher`` method.

.. note::
   Only one matcher can be registered for a single MatcherPass class.

.. code-block:: cpp

   // Register matcher and callback
   register_matcher(m, callback);
Execute MatcherPass
###################

MatcherPass has multiple ways to be executed:

* Run on a single node - it can be useful if you want to run MatcherPass inside another transformation.

  .. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
     :language: cpp
     :fragment: [matcher_pass:run_on_node]

* Run on an ``:ref:`ov::Model <doxid-classov_1_1_model>``` using GraphRewrite - this approach gives the ability to run MatcherPass on a whole ``:ref:`ov::Model <doxid-classov_1_1_model>```. Moreover, multiple MatcherPass transformations can be registered in a single GraphRewrite to be executed in a single graph traversal.

  .. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
     :language: cpp
     :fragment: [matcher_pass:graph_rewrite]

* Run on an ``:ref:`ov::Model <doxid-classov_1_1_model>``` using ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>``` - this approach helps you to register MatcherPass for execution on an ``:ref:`ov::Model <doxid-classov_1_1_model>```, like other transformation types.

  .. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
     :language: cpp
     :fragment: [matcher_pass:manager]
Pattern Matching
################

Sometimes patterns cannot be expressed via regular operations, or doing so is too complicated.
For example, you may want to detect a **Convolution->Add** sub-graph without specifying a particular input type for the Convolution operation, or you may want to create a pattern where some of the operations can have different types.
For these cases, OpenVINO™ provides additional helpers to construct patterns for GraphRewrite transformations.

There are two main helpers:

1. ``:ref:`ov::pass::pattern::any_input <doxid-namespaceov_1_1pass_1_1pattern_1a8ed84c3eed4610f117ee10d86d500e02>``` - helps to express inputs if their types are undefined.
2. ``:ref:`ov::pass::pattern::wrap_type <doxid-namespaceov_1_1pass_1_1pattern_1adfcd6031c95d7bace5f084e2aa105af8>`<T>`` - helps to express nodes of a pattern without specifying node attributes.
Let's go through an example to get a better understanding of how it works:

.. note::
   Node attributes do not participate in pattern matching and are needed only for operation creation. Only operation types participate in pattern matching.

The example below shows basic usage of ``ov::pass::pattern::any_input``.
Here we construct a Multiply pattern with an arbitrary first input and a Constant as the second input.
Also, as Multiply is a commutative operation, it does not matter in which order we set the inputs (any_input/Constant or Constant/any_input), because both cases will be matched.

.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [pattern:label_example]
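A sketch of such a pattern, with ``any_input`` for the arbitrary input and ``wrap_type`` for the Constant (names are illustrative):

.. code-block:: cpp

   #include <openvino/opsets/opset8.hpp>
   #include <openvino/pass/pattern/matcher.hpp>
   #include <openvino/pass/pattern/op/wrap_type.hpp>

   void create_multiply_pattern() {
       // Multiply(any_input, Constant); input order does not matter for matching
       auto mul = std::make_shared<ov::opset8::Multiply>(
           ov::pass::pattern::any_input(),
           ov::pass::pattern::wrap_type<ov::opset8::Constant>());
       auto matcher = std::make_shared<ov::pass::pattern::Matcher>(mul, "MultiplyMatcher");
   }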
This example shows how we can construct a pattern when an operation has an arbitrary number of inputs.

.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [pattern:concat_example]
This example shows how to use a predicate to construct a pattern. It also shows how to match a pattern manually on a given node.

.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [pattern:predicate_example]
.. note::
   Be careful with manual matching, because the Matcher object holds matched nodes. To clear a match, use the ``m->clear_state()`` method.

See Also
########

* :doc:`OpenVINO™ Transformations <openvino_docs_transformations>`

@endsphinxdirective
@@ -1,17 +1,26 @@

# OpenVINO Model Pass {#openvino_docs_Extensibility_UG_model_pass}

@sphinxdirective

``:ref:`ov::pass::ModelPass <doxid-classov_1_1pass_1_1_model_pass>``` is used for transformations that take the entire ``:ref:`ov::Model <doxid-classov_1_1_model>``` as input and process it.

Template for the ModelPass transformation class:

.. doxygensnippet:: docs/snippets/template_model_transformation.hpp
   :language: cpp
   :fragment: [model_pass:template_transformation_hpp]

.. doxygensnippet:: docs/snippets/template_model_transformation.cpp
   :language: cpp
   :fragment: [model_pass:template_transformation_cpp]
Using ``:ref:`ov::pass::ModelPass <doxid-classov_1_1pass_1_1_model_pass>```, you need to override the ``run_on_model`` method, where you will write the transformation code.
The return value is ``true`` if the original model has changed during the transformation (a new operation was added, an operation replacement was made, or node attributes were changed); otherwise, it is ``false``.
``:ref:`ov::pass::ModelPass <doxid-classov_1_1pass_1_1_model_pass>```-based transformations can also be executed via ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>```.

See Also
########

* :doc:`OpenVINO™ Transformations <openvino_docs_transformations>`

@endsphinxdirective
@@ -10,164 +10,208 @@
openvino_docs_Extensibility_UG_matcher_pass
openvino_docs_Extensibility_UG_graph_rewrite_pass

@endsphinxdirective
The OpenVINO Transformation mechanism allows developing transformation passes to modify an ``:ref:`ov::Model <doxid-classov_1_1_model>```. You can use this mechanism to apply additional optimizations to the original Model, or to transform unsupported subgraphs and operations into new operations supported by the plugin.
This guide contains all the necessary information to start implementing OpenVINO™ transformations.
Working with Model
##################

Before moving to the transformation part, we need to say several words about the functions which allow modifying an ``:ref:`ov::Model <doxid-classov_1_1_model>```.
This chapter extends the :doc:`model representation guide <openvino_docs_OV_UG_Model_Representation>` and shows an API that allows us to manipulate an ``:ref:`ov::Model <doxid-classov_1_1_model>```.
Working with node input and output ports
++++++++++++++++++++++++++++++++++++++++

First of all, let's talk about ``:ref:`ov::Node <doxid-classov_1_1_node>``` input/output ports. Each OpenVINO™ operation has input and output ports, except for operations of ``Parameter`` or ``Constant`` type.

Every port belongs to its node, so using a port we can access the parent node, get the shape and type of a particular input/output, get all consumers in the case of an output port, and get the producer node in the case of an input port.
With an output port, we can set inputs for newly created operations.

Let's look at the code example.

.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [ov:ports_example]
Node replacement
++++++++++++++++

OpenVINO™ provides two ways for node replacement: via an OpenVINO™ helper function and directly via port methods. We are going to review both of them.

Let's start with OpenVINO™ helper functions. The most popular function is ``ov::replace_node(old_node, new_node)``.

We will review a real replacement case, where a Negative operation is replaced with Multiply.

.. image:: ./_static/images/ngraph_replace_node.png
.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [ov:replace_node]

``:ref:`ov::replace_node <doxid-namespaceov_1a75d84ee654edb73fe4fb18936a5dca6d>``` has a constraint that the number of output ports for both operations must be the same; otherwise, it raises an exception.
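A sketch of this replacement, assuming ``neg`` is the matched Negative node:

.. code-block:: cpp

   #include <openvino/core/graph_util.hpp>
   #include <openvino/core/rt_info.hpp>
   #include <openvino/opsets/opset8.hpp>

   void replace_negative(const std::shared_ptr<ov::Node>& neg) {
       auto minus_one = ov::opset8::Constant::create(
           neg->get_output_element_type(0), ov::Shape{1}, {-1});
       auto mul = std::make_shared<ov::opset8::Multiply>(neg->input_value(0), minus_one);
       // Preserve the name and runtime info before rewiring consumers
       mul->set_friendly_name(neg->get_friendly_name());
       ov::copy_runtime_info(neg, mul);
       ov::replace_node(neg, mul);
   }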
The alternative way to do the same replacement is the following:

.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [ov:manual_replace]
Another transformation example is insertion.

.. image:: ./_static/images/ngraph_insert_node.png

.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [ov:insert_node]

The alternative way to insert an operation is to make a node copy and use ``:ref:`ov::replace_node() <doxid-namespaceov_1a75d84ee654edb73fe4fb18936a5dca6d>```:

.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [ov:insert_node_with_copy]
Node elimination
++++++++++++++++

Another type of node replacement is its elimination.

To eliminate an operation, OpenVINO™ has a special method that considers all limitations related to OpenVINO™ Runtime.

.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [ov:eliminate_node]

``:ref:`ov::replace_output_update_name() <doxid-namespaceov_1a75ba2120e573883bd96bb19c887c6a1d>```, in case of successful replacement, automatically preserves the friendly name and runtime info.
.. _transformations_types:

Transformations types
#####################

OpenVINO™ Runtime has three main transformation types:

* :doc:`Model pass <openvino_docs_Extensibility_UG_model_pass>` - a straightforward way to work with ``:ref:`ov::Model <doxid-classov_1_1_model>``` directly
* :doc:`Matcher pass <openvino_docs_Extensibility_UG_matcher_pass>` - a pattern-based transformation approach
* :doc:`Graph rewrite pass <openvino_docs_Extensibility_UG_graph_rewrite_pass>` - a container for matcher passes needed for efficient execution
.. image:: ./_static/images/transformations_structure.png
## Transformation conditional compilation
|
||||
Transformation conditional compilation
|
||||
######################################
|
||||
|
||||
Transformation library has two internal macros to support conditional compilation feature.
|
||||
|
||||
* `MATCHER_SCOPE(region)` - allows to disable the MatcherPass if matcher isn't used. The region name should be unique. This macro creates a local variable `matcher_name` which you should use as a matcher name.
|
||||
* `RUN_ON_MODEL_SCOPE(region)` - allows to disable run_on_model pass if it isn't used. The region name should be unique.
|
||||
* ``:ref:`MATCHER_SCOPE(region) <doxid-conditional__compilation_2include_2openvino_2cc_2pass_2itt_8hpp_1a3d1377542bcf3e305c33a1b683cc77df>``` - allows to disable the MatcherPass if matcher isn't used. The region name should be unique. This macro creates a local variable ``matcher_name`` which you should use as a matcher name.
|
||||
* ``:ref:`RUN_ON_MODEL_SCOPE(region) <doxid-conditional__compilation_2include_2openvino_2cc_2pass_2itt_8hpp_1ab308561b849d47b9c820506ec73c4a30>``` - allows to disable run_on_model pass if it isn't used. The region name should be unique.
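A minimal usage sketch, assuming a MatcherPass named `MyFusion` (the pass and the matched pattern are illustrative, not part of the transformation library):

```cpp
MyFusion::MyFusion() {
    MATCHER_SCOPE(MyFusion);  // defines the local variable `matcher_name`
    auto relu = ov::pass::pattern::wrap_type<ov::opset10::Relu>();
    ov::matcher_pass_callback callback = [=](ov::pass::pattern::Matcher& m) {
        // transformation logic goes here
        return false;
    };
    // `matcher_name` from MATCHER_SCOPE must be used here, so the matcher
    // can be disabled when the region is compiled out.
    register_matcher(std::make_shared<ov::pass::pattern::Matcher>(relu, matcher_name),
                     callback);
}
```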

## Transformation writing essentials <a name="transformation_writing_essentials"></a>
.. _transformation_writing_essentials:

Transformation writing essentials
#################################

When developing a transformation, you need to follow these transformation rules:

### 1. Friendly Names
1. Friendly Names
+++++++++++++++++

Each `ov::Node` has a unique name and a friendly name. In transformations, we care only about the friendly name because it represents the name from the model.
Each ``:ref:`ov::Node <doxid-classov_1_1_node>``` has a unique name and a friendly name. In transformations, we care only about the friendly name because it represents the name from the model.
To avoid losing the friendly name when replacing a node with another node or a subgraph, set the original friendly name on the last node of the replacing subgraph. See the example below.

@snippet ov_model_snippets.cpp ov:replace_friendly_name
.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [ov:replace_friendly_name]

In more advanced cases, when the replaced operation has several outputs and we add additional consumers to its outputs, we decide how to set the friendly name by arrangement.
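For a simple 1:1 replacement, preserving the friendly name can look like this minimal sketch (`old_node` and `new_node` are illustrative):

```cpp
// Keep the model-visible name on the replacement before rewiring the graph.
new_node->set_friendly_name(old_node->get_friendly_name());
ov::replace_node(old_node, new_node);
```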

### 2. Runtime Info
2. Runtime Info
+++++++++++++++

Runtime info is a map `std::map<std::string, ov::Any>` located inside the `ov::Node` class. It represents additional attributes of an `ov::Node`.
These attributes can be set by users or by plugins, and when executing a transformation that changes an `ov::Model`, we need to preserve these attributes, as they are not propagated automatically.
Runtime info is a map ``std::map<std::string, :ref:`ov::Any <doxid-classov_1_1_any>`>`` located inside the ``:ref:`ov::Node <doxid-classov_1_1_node>``` class. It represents additional attributes of an ``:ref:`ov::Node <doxid-classov_1_1_node>```.
These attributes can be set by users or by plugins, and when executing a transformation that changes an ``:ref:`ov::Model <doxid-classov_1_1_model>```, we need to preserve these attributes, as they are not propagated automatically.
In most cases, transformations have the following types: 1:1 (replace a node with another node), 1:N (replace a node with a sub-graph), N:1 (fuse a sub-graph into a single node), N:M (any other transformation).
Currently, there is no mechanism that automatically detects transformation types, so we need to propagate this runtime information manually. See the examples below.

@snippet ov_model_snippets.cpp ov:copy_runtime_info

When a transformation has multiple fusions or decompositions, `ov::copy_runtime_info` must be called multiple times, once for each case.
.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [ov:copy_runtime_info]

> **NOTE**: `copy_runtime_info` removes `rt_info` from destination nodes. If you want to keep it, you need to specify it in the source nodes like this: `copy_runtime_info({a, b, c}, {a, b})`
When a transformation has multiple fusions or decompositions, ``:ref:`ov::copy_runtime_info <doxid-namespaceov_1a3bb5969a95703b4b4fd77f6f58837207>``` must be called multiple times, once for each case.

### 3. Constant Folding
.. note:: ``copy_runtime_info`` removes ``rt_info`` from destination nodes. If you want to keep it, you need to specify it in the source nodes like this: ``copy_runtime_info({a, b, c}, {a, b})``
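A minimal sketch of such per-case propagation for two independent fusions in one pass (all node variables are illustrative):

```cpp
// N:1 fusion: attribute sources are the nodes of the fused sub-graph.
ov::copy_runtime_info({matmul, add, relu}, fused_node);
// A second, unrelated fusion in the same transformation needs its own call.
ov::copy_runtime_info({mul, sigmoid}, swish_node);
```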

If your transformation inserts constant sub-graphs that need to be folded, do not forget to run `ov::pass::ConstantFolding()` after your transformation, or call constant folding directly for the operation.
3. Constant Folding
+++++++++++++++++++

If your transformation inserts constant sub-graphs that need to be folded, do not forget to run ``:ref:`ov::pass::ConstantFolding() <doxid-classov_1_1pass_1_1_constant_folding>``` after your transformation, or call constant folding directly for the operation.
The example below shows how a constant subgraph can be constructed.

@snippet ov_model_snippets.cpp ov:constant_subgraph
.. doxygensnippet:: docs/snippets/ov_model_snippets.cpp
   :language: cpp
   :fragment: [ov:constant_subgraph]

Manual constant folding is preferable to `ov::pass::ConstantFolding()` because it is much faster.
Manual constant folding is preferable to ``:ref:`ov::pass::ConstantFolding() <doxid-classov_1_1pass_1_1_constant_folding>``` because it is much faster.

Below you can find an example of manual constant folding:

@snippet src/transformations/template_pattern_transformation.cpp manual_constant_folding
.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
   :language: cpp
   :fragment: [manual_constant_folding]

## Common mistakes in transformations <a name="common_mistakes"></a>
.. _common_mistakes:

Common mistakes in transformations
##################################

In the transformation development process:

* Do not use deprecated OpenVINO™ API. Deprecated methods have the `OPENVINO_DEPRECATED` macro in their definition.
* Do not pass `shared_ptr<Node>` as an input to another node if the node type is unknown or it has multiple outputs. Use an explicit output port (see the sketch after this list).
* If you replace a node with another node that produces a different shape, remember that the new shape is not propagated until the first `validate_nodes_and_infer_types` call for `ov::Model`. If you are using `ov::pass::Manager`, it automatically calls this method after each transformation execution.
* Do not forget to call the `ov::pass::ConstantFolding` pass if your transformation creates constant subgraphs.
* Do not use deprecated OpenVINO™ API. Deprecated methods have the ``OPENVINO_DEPRECATED`` macro in their definition.
* Do not pass ``shared_ptr<Node>`` as an input to another node if the node type is unknown or it has multiple outputs. Use an explicit output port (see the sketch after this list).
* If you replace a node with another node that produces a different shape, remember that the new shape is not propagated until the first ``validate_nodes_and_infer_types`` call for ``:ref:`ov::Model <doxid-classov_1_1_model>```. If you are using ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>```, it automatically calls this method after each transformation execution.
* Do not forget to call the ``:ref:`ov::pass::ConstantFolding <doxid-classov_1_1pass_1_1_constant_folding>``` pass if your transformation creates constant subgraphs.
* Use the latest OpSet unless you are developing a downgrade transformation pass.
* When developing a callback for `ov::pass::MatcherPass`, do not change nodes that come after the root node in topological order.
* When developing a callback for ``:ref:`ov::pass::MatcherPass <doxid-classov_1_1pass_1_1_matcher_pass>```, do not change nodes that come after the root node in topological order.
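A minimal sketch of the explicit-port rule (`producer` and `other` are illustrative nodes):

```cpp
// Passing `producer` directly is ambiguous if it has several outputs;
// name the exact output port that feeds the new node instead.
auto sum = std::make_shared<ov::opset10::Add>(producer->output(0), other->output(0));
```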

## Using pass manager <a name="using_pass_manager"></a>
.. _using_pass_manager:

`ov::pass::Manager` is a container class that can store a list of transformations and execute them. The main idea of this class is to provide a high-level representation for a grouped list of transformations.
It can register and apply any [transformation pass](#transformations-types) on a model.
In addition, `ov::pass::Manager` has extended debug capabilities (find more information in the [how to debug transformations](#how-to-debug-transformations) section).
Using pass manager
##################

The example below shows basic usage of `ov::pass::Manager`
``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>``` is a container class that can store a list of transformations and execute them. The main idea of this class is to provide a high-level representation for a grouped list of transformations.
It can register and apply any `transformation pass <#transformations_types>`__ on a model.
In addition, ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>``` has extended debug capabilities (find more information in the `how to debug transformations <#how_to_debug_transformations>`__ section).

@snippet src/transformations/template_pattern_transformation.cpp matcher_pass:manager3
The example below shows basic usage of ``:ref:`ov::pass::Manager <doxid-classov_1_1pass_1_1_manager>```

.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
   :language: cpp
   :fragment: [matcher_pass:manager3]
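A generic usage sketch (the registered pass list is illustrative):

```cpp
ov::pass::Manager manager;
manager.register_pass<ov::pass::ConstantFolding>();
// Validation and shape propagation run automatically between passes.
manager.run_passes(model);  // `model` is a std::shared_ptr<ov::Model>
```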

Another example shows how multiple matcher passes can be united into a single GraphRewrite.

@snippet src/transformations/template_pattern_transformation.cpp matcher_pass:manager2
.. doxygensnippet:: docs/snippets/template_pattern_transformation.cpp
   :language: cpp
   :fragment: [matcher_pass:manager2]
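A minimal sketch of such grouping (the matcher pass names are hypothetical):

```cpp
ov::pass::Manager manager;
// All matchers registered on one GraphRewrite run in a single model traversal.
auto rewrite = manager.register_pass<ov::pass::GraphRewrite>();
rewrite->add_matcher<MyFusionPass>();
rewrite->add_matcher<MyDecompositionPass>();
manager.run_passes(model);
```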

.. _how_to_debug_transformations:

## How to debug transformations <a name="how-to-debug-transformations"></a>
How to debug transformations
############################

If you are using `ngraph::pass::Manager` to run a sequence of transformations, you can get additional debug capabilities by using the following environment variables:
If you are using ``ngraph::pass::Manager`` to run a sequence of transformations, you can get additional debug capabilities by using the following environment variables:

```
OV_PROFILE_PASS_ENABLE=1 - enables performance measurement for each transformation and prints execution status
OV_ENABLE_VISUALIZE_TRACING=1 - enables visualization after each transformation. By default, it saves dot and svg files.
```
.. code-block:: sh

   OV_PROFILE_PASS_ENABLE=1 - enables performance measurement for each transformation and prints execution status
   OV_ENABLE_VISUALIZE_TRACING=1 - enables visualization after each transformation. By default, it saves dot and svg files.

> **NOTE**: Make sure that you have dot installed on your machine; otherwise, it will silently save only a dot file without an svg file.

## See Also
.. note:: Make sure that you have dot installed on your machine; otherwise, it will silently save only a dot file without an svg file.

* [OpenVINO™ Model Representation](../OV_Runtime_UG/model_representation.md)
* [OpenVINO™ Extensions](./Intro.md)
See Also
########

[ngraph_replace_node]: ./img/ngraph_replace_node.png
[ngraph_insert_node]: ./img/ngraph_insert_node.png
[transformations_structure]: ./img/transformations_structure.png
[register_new_node]: ./img/register_new_node.png
* :doc:`OpenVINO™ Model Representation <openvino_docs_OV_UG_Model_Representation>`
* :doc:`OpenVINO™ Extensions <openvino_docs_Extensibility_UG_Intro>`

@endsphinxdirective

@@ -1,49 +1,45 @@
# Asynchronous Inference Request {#openvino_docs_ie_plugin_dg_async_infer_request}
# Asynchronous Inference Request {#openvino_docs_ov_plugin_dg_async_infer_request}

Asynchronous Inference Request runs an inference pipeline asynchronously in one or several task executors, depending on the device pipeline structure.
OpenVINO Runtime Plugin API provides the base InferenceEngine::AsyncInferRequestThreadSafeDefault class:
OpenVINO Runtime Plugin API provides the base ov::IAsyncInferRequest class:

- The class has the `_pipeline` field of `std::vector<std::pair<ITaskExecutor::Ptr, Task> >`, which contains pairs of an executor and the task it executes.
- The class has the `m_pipeline` field of `std::vector<std::pair<std::shared_ptr<ov::threading::ITaskExecutor>, ov::threading::Task> >`, which contains pairs of an executor and the task it executes.
- All executors are passed as arguments to the class constructor, and they are in the running state and ready to run tasks.
- The class has the InferenceEngine::AsyncInferRequestThreadSafeDefault::StopAndWait method, which waits for `_pipeline` to finish in the class destructor. The method does not stop the task executors; they stay in the running state because they belong to the executable network instance and are not destroyed.
- The class has the ov::IAsyncInferRequest::stop_and_wait method, which waits for `m_pipeline` to finish in the class destructor. The method does not stop the task executors; they stay in the running state because they belong to the compiled model instance and are not destroyed.

`AsyncInferRequest` Class
AsyncInferRequest Class
------------------------

OpenVINO Runtime Plugin API provides the base InferenceEngine::AsyncInferRequestThreadSafeDefault class for a custom asynchronous inference request implementation:
OpenVINO Runtime Plugin API provides the base ov::IAsyncInferRequest class for a custom asynchronous inference request implementation:

@snippet src/template_async_infer_request.hpp async_infer_request:header
@snippet src/async_infer_request.hpp async_infer_request:header

#### Class Fields
### Class Fields

- `_inferRequest` - a reference to the [synchronous inference request](@ref openvino_docs_ie_plugin_dg_infer_request) implementation. Its methods are reused in the `AsyncInferRequest` constructor to define a device pipeline.
- `_waitExecutor` - a task executor that waits for a response from a device about device task completion
- `m_wait_executor` - a task executor that waits for a response from a device about device task completion

> **NOTE**: If a plugin can work with several instances of a device, `_waitExecutor` must be device-specific. Otherwise, having a single task executor for several devices does not allow them to work in parallel.
> **NOTE**: If a plugin can work with several instances of a device, `m_wait_executor` must be device-specific. Otherwise, having a single task executor for several devices does not allow them to work in parallel.

### `AsyncInferRequest()`
### AsyncInferRequest()

The main goal of the `AsyncInferRequest` constructor is to define a device pipeline `_pipeline`. The example below demonstrates `_pipeline` creation with the following stages:
The main goal of the `AsyncInferRequest` constructor is to define a device pipeline `m_pipeline`. The example below demonstrates `m_pipeline` creation with the following stages:

- `inferPreprocess` is a CPU compute task.
- `startPipeline` is a CPU lightweight task to submit tasks to a remote device.
- `waitPipeline` is a CPU non-compute task that waits for a response from a remote device.
- `inferPostprocess` is a CPU compute task.
- `infer_preprocess_and_start_pipeline` is a CPU lightweight task to submit tasks to a remote device.
- `wait_pipeline` is a CPU non-compute task that waits for a response from a remote device.
- `infer_postprocess` is a CPU compute task.

@snippet src/template_async_infer_request.cpp async_infer_request:ctor
@snippet src/async_infer_request.cpp async_infer_request:ctor

The stages are distributed between two task executors in the following way:

- `inferPreprocess` and `startPipeline` are combined into a single task and run on `_requestExecutor`, which computes CPU tasks.
- `infer_preprocess_and_start_pipeline` prepares input tensors and runs on `m_request_executor`, which computes CPU tasks.
- You need at least two executors to overlap the compute tasks of a CPU and of a remote device the plugin works with. Otherwise, CPU and device tasks are executed serially, one by one.
- `waitPipeline` is sent to `_waitExecutor`, which works with the device.
- `wait_pipeline` is sent to `m_wait_executor`, which works with the device.

> **NOTE**: `callbackExecutor` is also passed to the constructor, and it is used in the base InferenceEngine::AsyncInferRequestThreadSafeDefault class, which adds a pair of `callbackExecutor` and a callback function set by the user to the end of the pipeline.
> **NOTE**: `m_callback_executor` is also passed to the constructor, and it is used in the base ov::IAsyncInferRequest class, which adds a pair of `callback_executor` and a callback function set by the user to the end of the pipeline.
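Assembled together, the pipeline could look like this sketch inside a hypothetical `AsyncInferRequest` constructor (the member names follow this page, but the exact base-class wiring is an assumption, not the Template plugin code):

```cpp
// Each pair is {executor, task}; stages run in order, each on its executor.
m_pipeline = {
    {m_request_executor, [this] { infer_preprocess_and_start_pipeline(); }},
    {m_wait_executor,    [this] { wait_pipeline(); }},
    {m_request_executor, [this] { infer_postprocess(); }}
};
```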

Inference request stages are also profiled using IE_PROFILING_AUTO_SCOPE, which shows how pipelines of multiple asynchronous inference requests are run in parallel via the [Intel® VTune™ Profiler](https://software.intel.com/en-us/vtune) tool.
### ~AsyncInferRequest()

### `~AsyncInferRequest()`
In the asynchronous request destructor, it is necessary to wait for the pipeline to finish. It can be done using the ov::IAsyncInferRequest::stop_and_wait method of the base class.

In the asynchronous request destructor, it is necessary to wait for the pipeline to finish. It can be done using the InferenceEngine::AsyncInferRequestThreadSafeDefault::StopAndWait method of the base class.

@snippet src/template_async_infer_request.cpp async_infer_request:dtor
@snippet src/async_infer_request.cpp async_infer_request:dtor

@@ -1,4 +1,4 @@
# Build Plugin Using CMake {#openvino_docs_ie_plugin_dg_plugin_build}
# Build Plugin Using CMake {#openvino_docs_ov_plugin_dg_plugin_build}

OpenVINO build infrastructure provides the OpenVINO Developer Package for plugin development.

@@ -18,28 +18,21 @@ Once the commands above are executed, the OpenVINO Developer Package is generate
- `OpenVINODeveloperPackageConfig-version.cmake` - a file with the package version.
- `targets_developer.cmake` - an automatically generated file which contains all targets exported from the OpenVINO build tree. This file is included by `OpenVINODeveloperPackageConfig.cmake` to import the following targets:
  - Libraries for plugin development:
    * `openvino::ngraph` - shared OpenVINO library
    * `openvino::openvino_gapi_preproc` - shared library with the OpenVINO preprocessing plugin
    * `openvino::core::dev` - interface library with OpenVINO Core development headers
    * `openvino::runtime::dev` - interface library with OpenVINO Plugin API headers
    * `openvino::runtime` - shared OpenVINO library
    * `openvino::runtime::dev` - interface library with the OpenVINO Developer API
    * `openvino::pugixml` - static Pugixml library
    * `openvino::xbyak` - interface library with Xbyak headers
    * `openvino::itt` - static library with tools for performance measurement using Intel ITT
  - Libraries for test development:
    * `IE::gtest`, `IE::gtest_main`, `IE::gmock` - Google Test framework libraries
    * `IE::commonTestUtils` - static library with common test utilities
    * `IE::funcTestUtils` - static library with functional test utilities
    * `IE::unitTestUtils` - static library with unit test utilities
    * `IE::ngraphFunctions` - static library with a set of `ngraph::Function` builders
    * `IE::funcSharedTests` - static library with common functional tests
    * `openvino::gtest`, `openvino::gtest_main`, `openvino::gmock` - Google Test framework libraries
    * `openvino::commonTestUtils` - static library with common test utilities
    * `openvino::funcTestUtils` - static library with functional test utilities
    * `openvino::unitTestUtils` - static library with unit test utilities
    * `openvino::ngraphFunctions` - static library with a set of `ov::Model` builders
    * `openvino::funcSharedTests` - static library with common functional tests
    * `openvino::ngraph_reference` - static library with operation reference implementations.

> **NOTE**: it is enough to run the `cmake --build . --target ie_dev_targets` command to build only targets from the
> **NOTE**: it is enough to run the `cmake --build . --target ov_dev_targets` command to build only targets from the
> OpenVINO Developer Package.

Build Plugin using OpenVINO Developer Package
@@ -61,31 +54,7 @@ A common plugin consists of the following components:
To build a plugin and its tests, run the following CMake scripts:

- Root `CMakeLists.txt`, which finds the OpenVINO Developer Package using the `find_package` CMake command and adds the `src` and `tests` subdirectories with plugin sources and their tests, respectively:

```cmake
cmake_minimum_required(VERSION 3.13)

project(OpenVINOTemplatePlugin)

set(TEMPLATE_PLUGIN_SOURCE_DIR ${OpenVINOTemplatePlugin_SOURCE_DIR})

find_package(OpenVINODeveloperPackage REQUIRED)

if(CMAKE_COMPILER_IS_GNUCXX)
    ov_add_compiler_flags(-Wall)
endif()

add_subdirectory(src)

if(ENABLE_TESTS)
    include(CTest)
    enable_testing()

    if(ENABLE_FUNCTIONAL_TESTS)
        add_subdirectory(tests/functional)
    endif()
endif()
```
@snippet template/CMakeLists.txt cmake:main
> **NOTE**: The default values of the `ENABLE_TESTS` and `ENABLE_FUNCTIONAL_TESTS` options are shared via the OpenVINO Developer Package and they are the same as for the main OpenVINO build tree. You can override them during the plugin build using the command below:
```bash
$ cmake -DENABLE_FUNCTIONAL_TESTS=OFF -DOpenVINODeveloperPackage_DIR=../openvino-release-build ../template-plugin
```
@@ -93,7 +62,7 @@ $ cmake -DENABLE_FUNCTIONAL_TESTS=OFF -DOpenVINODeveloperPackage_DIR=../openvino

- `src/CMakeLists.txt` to build a plugin shared library from sources:
@snippet template/src/CMakeLists.txt cmake:plugin
> **NOTE**: The `openvino::runtime` target is imported from the OpenVINO Developer Package.
> **NOTE**: The `openvino::...` targets are imported from the OpenVINO Developer Package.

- `tests/functional/CMakeLists.txt` to build a set of functional plugin tests:
@snippet template/tests/functional/CMakeLists.txt cmake:functional_tests

docs/IE_PLUGIN_DG/CompiledModel.md (new file, 89 lines)
@@ -0,0 +1,89 @@

# Compiled Model {#openvino_docs_ov_plugin_dg_compiled_model}

ov::CompiledModel class functionality:
- Compile an ov::Model instance to a backend-specific graph representation
- Create an arbitrary number of ov::InferRequest objects
- Hold some common resources shared between different instances of ov::InferRequest. For example:
  - ov::ICompiledModel::m_task_executor task executor to implement asynchronous execution
  - ov::ICompiledModel::m_callback_executor task executor to run an asynchronous inference request callback in a separate thread

CompiledModel Class
------------------------

OpenVINO Plugin API provides the interface ov::ICompiledModel, which should be used as a base class for a compiled model. Based on that, a declaration of a compiled model class can look as follows:

@snippet src/compiled_model.hpp compiled_model:header

### Class Fields

The example class has several fields:

- `m_request_id` - Tracks the number of created inference requests, which is used to distinguish different inference requests during profiling via the Intel® Instrumentation and Tracing Technology (ITT) library.
- `m_cfg` - Defines the configuration the compiled model was compiled with.
- `m_model` - Keeps a reference to the transformed `ov::Model`, which is used in OpenVINO reference backend computations. Note that in case of other backends with a backend-specific graph representation, `m_model` has a different type and represents a backend-specific graph or just a set of computational kernels to perform the inference.
- `m_loaded_from_cache` - Indicates whether the model was loaded from cache.

### CompiledModel Constructor

This constructor accepts a generic representation of a model as an ov::Model, which is compiled into a backend-specific device graph:

@snippet src/compiled_model.cpp compiled_model:ctor

The implementation of `compile_model()` is fully device-specific.

### compile_model()

The function accepts a const shared pointer to an `ov::Model` object and applies OpenVINO passes using the `transform_model()` function, which defines a plugin-specific conversion pipeline. To support low-precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware-specific. You can find out how to use and configure Low Precision Transformations in the [Low Precision Transformations](@ref openvino_docs_OV_UG_lpt) guide.

@snippet src/compiled_model.cpp compiled_model:compile_model

> **NOTE**: After all these steps, the backend-specific graph is ready to create inference requests and perform inference.

### export_model()

The implementation of the method should write all data to the `model_stream`, which is required to import the backend-specific graph later in the `Plugin::import_model` method:

@snippet src/compiled_model.cpp compiled_model:export_model

### create_sync_infer_request()

The method creates a synchronous inference request and returns it.

@snippet src/compiled_model.cpp compiled_model:create_sync_infer_request

While the public OpenVINO API has a single interface for inference requests, which can be executed in synchronous and asynchronous modes, a plugin library implementation has two separate classes:

- [Synchronous inference request](@ref openvino_docs_ov_plugin_dg_infer_request), which defines pipeline stages and runs them synchronously in the `infer` method.
- [Asynchronous inference request](@ref openvino_docs_ov_plugin_dg_async_infer_request), which is a wrapper for a synchronous inference request and can run a pipeline asynchronously. Depending on the device pipeline structure, it can have one or several stages:
  - For single-stage pipelines, there is no need to define this method and create a class derived from ov::IAsyncInferRequest. For single-stage pipelines, a default implementation of this method creates ov::IAsyncInferRequest wrapping a synchronous inference request and runs it asynchronously in the `m_request_executor` executor.
  - For pipelines with multiple stages, such as performing some preprocessing on host, uploading input data to a device, running inference on a device, or downloading and postprocessing output data, schedule stages on several task executors to achieve better device use and performance. You can do it by creating a sufficient number of inference requests running in parallel. In this case, device stages of different inference requests are overlapped with the preprocessing and postprocessing stages, giving better performance.
> **IMPORTANT**: It is up to you to decide how many task executors you need to optimally execute a device pipeline.

### create_infer_request()

The method creates an asynchronous inference request and returns it.

@snippet src/compiled_model.cpp compiled_model:create_infer_request

### get_property()

Returns the current value of the property with the name `name`. The method extracts the configuration values the compiled model was compiled with.

@snippet src/compiled_model.cpp compiled_model:get_property

This function is the only way to get configuration values when a model is imported and compiled by other developers and tools.

### set_property()

The method allows setting compiled-model-specific properties.

@snippet src/compiled_model.cpp compiled_model:set_property

### get_runtime_model()

The method returns the runtime model with backend-specific information.

@snippet src/compiled_model.cpp compiled_model:get_runtime_model

The next step in the plugin library implementation is the [Synchronous Inference Request](@ref openvino_docs_ov_plugin_dg_infer_request) class.
@@ -1,90 +0,0 @@
# Executable Network {#openvino_docs_ie_plugin_dg_executable_network}

`ExecutableNetwork` class functionality:
- Compile an InferenceEngine::ICNNNetwork instance to a backend-specific graph representation
- Create an arbitrary number of `InferRequest` objects
- Hold some common resources shared between different instances of `InferRequest`. For example:
  - InferenceEngine::IExecutableNetworkInternal::_taskExecutor task executor to implement asynchronous execution
  - InferenceEngine::IExecutableNetworkInternal::_callbackExecutor task executor to run an asynchronous inference request callback in a separate thread

`ExecutableNetwork` Class
------------------------

Inference Engine Plugin API provides the helper InferenceEngine::ExecutableNetworkThreadSafeDefault class, recommended for use as a base class for an executable network. Based on that, a declaration of an executable network class can look as follows:

@snippet src/compiled_model.hpp executable_network:header

#### Class Fields

The example class has several fields:

- `_requestId` - Tracks the number of created inference requests, which is used to distinguish different inference requests during profiling via the Intel® Instrumentation and Tracing Technology (ITT) library.
- `_cfg` - Defines the configuration the executable network was compiled with.
- `_plugin` - Refers to a plugin instance.
- `_function` - Keeps a reference to the transformed `ngraph::Function`, which is used in ngraph reference backend computations. Note that in case of other backends with a backend-specific graph representation, `_function` has a different type and represents a backend-specific graph or just a set of computational kernels to perform the inference.
- `_inputIndex` - maps the name of an input to its index among all network inputs.
- `_outputIndex` - maps the name of an output to its index among all network outputs.

### `ExecutableNetwork` Constructor with `ICNNNetwork`

This constructor accepts a generic representation of a neural network as an InferenceEngine::ICNNNetwork reference, which is compiled into a backend-specific device graph:

@snippet src/compiled_model.cpp executable_network:ctor_cnnnetwork

The implementation of `CompileNetwork` is fully device-specific.

### `CompileNetwork()`

The function accepts a const shared pointer to an `ngraph::Function` object and performs the following steps:

1. Applies nGraph passes using the `TransformNetwork` function, which defines a plugin-specific conversion pipeline. To support low-precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware-specific. You can find out how to use and configure Low Precision Transformations in the [Low Precision Transformations](@ref openvino_docs_OV_UG_lpt) guide.
2. Maps the transformed graph to a backend-specific graph representation (for example, to the CPU plugin internal graph representation).
3. Allocates and fills memory for graph weights, backend-specific memory handles, and so on.

@snippet src/compiled_model.cpp executable_network:map_graph

> **NOTE**: After all these steps, the backend-specific graph is ready to create inference requests and perform inference.

### `ExecutableNetwork` Constructor Importing from Stream

This constructor creates a backend-specific graph by importing from a stream object:

> **NOTE**: The export of the backend-specific graph is done in the `Export` method, and the data formats must be the same for both import and export.

### `Export()`

The implementation of the method should write all data to the `model` stream, which is required to import the backend-specific graph later in the `Plugin::Import` method:

@snippet src/compiled_model.cpp executable_network:export

### `CreateInferRequest()`

The method creates an asynchronous inference request and returns it. While the public Inference Engine API has a single interface for inference requests, which can be executed in synchronous and asynchronous modes, a plugin library implementation has two separate classes:

- [Synchronous inference request](@ref openvino_docs_ie_plugin_dg_infer_request), which defines pipeline stages and runs them synchronously in the `Infer` method.
- [Asynchronous inference request](@ref openvino_docs_ie_plugin_dg_async_infer_request), which is a wrapper for a synchronous inference request and can run a pipeline asynchronously. Depending on the device pipeline structure, it can have one or several stages:
  - For single-stage pipelines, there is no need to define this method and create a class derived from InferenceEngine::AsyncInferRequestThreadSafeDefault. For single-stage pipelines, a default implementation of this method creates InferenceEngine::AsyncInferRequestThreadSafeDefault wrapping a synchronous inference request and runs it asynchronously in the `_taskExecutor` executor.
  - For pipelines with multiple stages, such as performing some preprocessing on host, uploading input data to a device, running inference on a device, or downloading and postprocessing output data, schedule stages on several task executors to achieve better device use and performance. You can do it by creating a sufficient number of inference requests running in parallel. In this case, device stages of different inference requests are overlapped with the preprocessing and postprocessing stages, giving better performance.
> **IMPORTANT**: It is up to you to decide how many task executors you need to optimally execute a device pipeline.

@snippet src/compiled_model.cpp executable_network:create_infer_request

### `GetMetric()`

Returns a metric value for the metric with the name `name`. A metric is a static type of information about an executable network. Examples of metrics:

- EXEC_NETWORK_METRIC_KEY(NETWORK_NAME) - name of an executable network
- EXEC_NETWORK_METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS) - a heuristic denoting the optimal (or at least sub-optimal) number of inference requests needed to run asynchronously to use the current device fully
- Any other executable network metric specific to a particular device. Such metrics and possible values must be declared in a plugin configuration public header, for example, `template/config.hpp`

The IE_SET_METRIC_RETURN helper macro sets the metric value and checks that the actual metric type matches the type of the specified value.

### `GetConfig()`

Returns the current value of the configuration key with the name `name`. The method extracts the configuration values the executable network was compiled with.

@snippet src/compiled_model.cpp executable_network:get_config

This function is the only way to get configuration values when a network is imported and compiled by other developers and tools (for example, the [Compile tool](@ref openvino_inference_engine_tools_compile_tool_README)).

The next step in the plugin library implementation is the [Synchronous Inference Request](@ref openvino_docs_ie_plugin_dg_infer_request) class.
@@ -1,83 +1,92 @@
# Synchronous Inference Request {#openvino_docs_ie_plugin_dg_infer_request}
# Synchronous Inference Request {#openvino_docs_ov_plugin_dg_infer_request}

`InferRequest` class functionality:
- Allocate input and output blobs needed for a backend-dependent network inference.
- Define functions for inference process stages (for example, `preprocess`, `upload`, `infer`, `download`, `postprocess`). These functions can later be used to define an execution pipeline during the [Asynchronous Inference Request](@ref openvino_docs_ie_plugin_dg_async_infer_request) implementation.
- Allocate input and output tensors needed for a backend-dependent network inference.
- Define functions for inference process stages (for example, `preprocess`, `upload`, `infer`, `download`, `postprocess`). These functions can later be used to define an execution pipeline during the [Asynchronous Inference Request](@ref openvino_docs_ov_plugin_dg_async_infer_request) implementation.
- Call inference stages one by one synchronously.

`InferRequest` Class
InferRequest Class
------------------------

Inference Engine Plugin API provides the helper InferenceEngine::IInferRequestInternal class recommended
to use as a base class for a synchronous inference request implementation. Based on that, a declaration
OpenVINO Plugin API provides the interface ov::ISyncInferRequest which should be
used as a base class for a synchronous inference request implementation. Based on that, a declaration
of a synchronous request class can look as follows:

@snippet src/template_infer_request.hpp infer_request:header
@snippet src/sync_infer_request.hpp infer_request:header

#### Class Fields
### Class Fields

The example class has several fields:

- `_executableNetwork` - a reference to an executable network instance. From this reference, an inference request instance can take a task executor, a use counter for the number of created inference requests, and so on.
- `_profilingTask` - an array of the `std::array<InferenceEngine::ProfilingTask, numOfStages>` type. Defines names for pipeline stages. Used to profile an inference pipeline execution with the Intel® Instrumentation and Tracing Technology (ITT).
- `_durations` - an array of durations of each pipeline stage.
- `_networkInputBlobs` - an input blob map.
- `_networkOutputBlobs` - an output blob map.
- `_parameters` - `ngraph::Function` parameter operations.
- `_results` - `ngraph::Function` result operations.
- `m_profiling_task` - an array of the `std::array<openvino::itt::handle_t, numOfStages>` type. Defines names for pipeline stages. Used to profile an inference pipeline execution with the Intel® Instrumentation and Tracing Technology (ITT).
- `m_durations` - an array of durations of each pipeline stage.
- backend-specific fields:
  - `_inputTensors` - input tensors which wrap the `_networkInputBlobs` blobs. They are used as inputs to the backend `_executable` computational graph.
  - `_outputTensors` - output tensors which wrap the `_networkOutputBlobs` blobs. They are used as outputs from the backend `_executable` computational graph.
  - `_executable` - an executable object / backend computational graph.
  - `m_backend_input_tensors` - input backend tensors.
  - `m_backend_output_tensors` - output backend tensors.
  - `m_executable` - an executable object / backend computational graph.
  - `m_eval_context` - an evaluation context to save backend states after the inference.
  - `m_variable_states` - a vector of variable states.

### `InferRequest` Constructor
### InferRequest Constructor

The constructor initializes helper fields and calls methods which allocate blobs:
The constructor initializes helper fields and calls methods which allocate tensors:

@snippet src/template_infer_request.cpp infer_request:ctor
@snippet src/sync_infer_request.cpp infer_request:ctor

> **NOTE**: Call InferenceEngine::CNNNetwork::getInputsInfo and InferenceEngine::CNNNetwork::getOutputsInfo to specify both the layout and precision of blobs, which you can set with InferenceEngine::InferRequest::SetBlob and get with InferenceEngine::InferRequest::GetBlob. A plugin uses these hints to determine its internal layouts and precisions for input and output blobs if needed.
> **NOTE**: Use the inputs/outputs information from the compiled model to understand the shape and element type of tensors, which you can set with ov::InferRequest::set_tensor and get with ov::InferRequest::get_tensor. A plugin uses these hints to determine its internal layouts and element types for input and output tensors if needed.

### `~InferRequest` Destructor
### ~InferRequest Destructor

Decrements the number of created inference requests:
The destructor can contain plugin-specific logic to finish and destroy the infer request.

@snippet src/template_infer_request.cpp infer_request:dtor
@snippet src/sync_infer_request.cpp infer_request:dtor

### `InferImpl()`
### set_tensors_impl()

**Implementation details:** The base IInferRequestInternal class implements the public InferenceEngine::IInferRequestInternal::Infer method as follows:
- Checks the blobs set by users
- Calls the `InferImpl` method defined in a derived class to call the actual pipeline stages synchronously
The method allows setting batched tensors if the plugin supports it.

@snippet src/template_infer_request.cpp infer_request:infer_impl
@snippet src/sync_infer_request.cpp infer_request:set_tensors_impl

#### 1. `inferPreprocess`
### query_state()

Below is the code of the `inferPreprocess` method, demonstrating the handling of the Inference Engine common preprocessing step:
The method returns variable states from the model.

@snippet src/template_infer_request.cpp infer_request:infer_preprocess
@snippet src/sync_infer_request.cpp infer_request:query_state

**Details:**
* `InferImpl` must call the InferenceEngine::IInferRequestInternal::execDataPreprocessing function, which executes the common Inference Engine preprocessing step (for example, applies resize or color conversion operations) if it is set by the user. The output dimensions, layout, and precision match the input information set via InferenceEngine::CNNNetwork::getInputsInfo.
* If the `inputBlob` passed by the user differs in precision from the precision expected by the plugin, `blobCopy` is performed, which does the actual precision conversion.
### infer()

#### 2. `startPipeline`
The method calls the actual pipeline stages synchronously. Inside the method, the plugin should check input/output tensors, move external tensors to the backend, and run the inference.

Executes a pipeline synchronously using the `_executable` object:
@snippet src/sync_infer_request.cpp infer_request:infer
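A hypothetical sketch of such an `infer()` that chains the four stages described below:

```cpp
void InferRequest::infer() {
    infer_preprocess();   // check user tensors, convert to the backend format
    start_pipeline();     // run the backend executable
    wait_pipeline();      // wait for the device in asynchronous setups
    infer_postprocess();  // convert backend results back to user tensors
}
```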

@snippet src/template_infer_request.cpp infer_request:start_pipeline
#### 1. infer_preprocess()

#### 3. `inferPostprocess`
Below is the code of the `infer_preprocess()` method. The method checks the user input/output tensors and demonstrates conversion from a user tensor to a backend-specific representation:

Converts output blobs if the precisions of the backend output blobs and the blobs passed by the user are different:
@snippet src/sync_infer_request.cpp infer_request:infer_preprocess

@snippet src/template_infer_request.cpp infer_request:infer_postprocess
#### 2. start_pipeline()

### `GetPerformanceCounts()`
Executes a pipeline synchronously using the `m_executable` object:

The method sets performance counters which were measured during pipeline stage execution:
@snippet src/sync_infer_request.cpp infer_request:start_pipeline

@snippet src/template_infer_request.cpp infer_request:get_performance_counts
#### 3. wait_pipeline()

The next step in the plugin library implementation is the [Asynchronous Inference Request](@ref openvino_docs_ie_plugin_dg_async_infer_request) class.
Waits for a pipeline in case of asynchronous plugin execution:

@snippet src/sync_infer_request.cpp infer_request:wait_pipeline

#### 4. infer_postprocess()

Converts backend-specific tensors to the tensors passed by the user:

@snippet src/sync_infer_request.cpp infer_request:infer_postprocess

### get_profiling_info()

The method returns the profiling info which was measured during pipeline stage execution:

@snippet src/sync_infer_request.cpp infer_request:get_profiling_info

The next step in the plugin library implementation is the [Asynchronous Inference Request](@ref openvino_docs_ov_plugin_dg_async_infer_request) class.

@@ -1,4 +1,4 @@
# Overview of Inference Engine Plugin Library {#openvino_docs_ie_plugin_dg_overview}
# Overview of OpenVINO Plugin Library {#openvino_docs_ie_plugin_dg_overview}

@sphinxdirective

@@ -7,62 +7,67 @@
   :caption: Converting and Preparing Models
   :hidden:

   Implement Plugin Functionality <openvino_docs_ie_plugin_dg_plugin>
   Implement Executable Network Functionality <openvino_docs_ie_plugin_dg_executable_network>
   Implement Synchronous Inference Request <openvino_docs_ie_plugin_dg_infer_request>
   Implement Asynchronous Inference Request <openvino_docs_ie_plugin_dg_async_infer_request>
   openvino_docs_ie_plugin_dg_plugin_build
   openvino_docs_ie_plugin_dg_plugin_testing
   Implement Plugin Functionality <openvino_docs_ov_plugin_dg_plugin>
   Implement Compiled Model Functionality <openvino_docs_ov_plugin_dg_compiled_model>
   Implement Synchronous Inference Request <openvino_docs_ov_plugin_dg_infer_request>
   Implement Asynchronous Inference Request <openvino_docs_ov_plugin_dg_async_infer_request>
   Provide Plugin Specific Properties <openvino_docs_ov_plugin_dg_properties>
   Implement Remote Context <openvino_docs_ov_plugin_dg_remote_context>
   Implement Remote Tensor <openvino_docs_ov_plugin_dg_remote_tensor>
   openvino_docs_ov_plugin_dg_plugin_build
   openvino_docs_ov_plugin_dg_plugin_testing
   openvino_docs_ie_plugin_detailed_guides
   openvino_docs_ie_plugin_api_references

@endsphinxdirective

The plugin architecture of the Inference Engine allows the development and plugging in of independent inference
The plugin architecture of OpenVINO allows the development and plugging in of independent inference
solutions dedicated to different devices. Physically, a plugin is represented as a dynamic library
exporting the single `CreatePluginEngine` function, which creates a new plugin instance.

Inference Engine Plugin Library
OpenVINO Plugin Library
-----------------------

An Inference Engine plugin dynamic library consists of several main components:
An OpenVINO plugin dynamic library consists of several main components:

1. [Plugin class](@ref openvino_docs_ie_plugin_dg_plugin):
   - Provides information about devices of a specific type.
   - Can create an [executable network](@ref openvino_docs_ie_plugin_dg_executable_network) instance which represents a Neural
     Network backend-specific graph structure for a particular device, in contrast to the InferenceEngine::ICNNNetwork
     interface, which is backend-independent.
   - Can import an already compiled graph structure from an input stream into an
     [executable network](@ref openvino_docs_ie_plugin_dg_executable_network) object.
2. [Executable Network class](@ref openvino_docs_ie_plugin_dg_executable_network):
   - Is an execution configuration compiled for a particular device and takes into account its capabilities.
   - Holds a reference to a particular device and a task executor for this device.
   - Can create several instances of [Inference Request](@ref openvino_docs_ie_plugin_dg_infer_request).
   - Can export an internal backend-specific graph structure to an output stream.
3. [Inference Request class](@ref openvino_docs_ie_plugin_dg_infer_request):
1. [Plugin class](@ref openvino_docs_ov_plugin_dg_plugin):
   - Provides information about devices of a specific type.
   - Can create a [compiled model](@ref openvino_docs_ov_plugin_dg_compiled_model) instance which represents a Neural Network backend-specific graph structure for a particular device, in contrast to the ov::Model,
     which is backend-independent.
   - Can import an already compiled graph structure from an input stream into a
     [compiled model](@ref openvino_docs_ov_plugin_dg_compiled_model) object.
2. [Compiled Model class](@ref openvino_docs_ov_plugin_dg_compiled_model):
   - Is an execution configuration compiled for a particular device and takes into account its capabilities.
   - Holds a reference to a particular device and a task executor for this device.
   - Can create several instances of [Inference Request](@ref openvino_docs_ov_plugin_dg_infer_request).
   - Can export an internal backend-specific graph structure to an output stream.
3. [Inference Request class](@ref openvino_docs_ov_plugin_dg_infer_request):
   - Runs an inference pipeline serially.
   - Can extract performance counters for inference pipeline execution profiling.
4. [Asynchronous Inference Request class](@ref openvino_docs_ie_plugin_dg_async_infer_request):
   - Wraps the [Inference Request](@ref openvino_docs_ie_plugin_dg_infer_request) class and runs pipeline stages in parallel
     on several task executors based on a device-specific pipeline structure.
4. [Asynchronous Inference Request class](@ref openvino_docs_ov_plugin_dg_async_infer_request):
   - Wraps the [Inference Request](@ref openvino_docs_ov_plugin_dg_infer_request) class and runs pipeline stages in parallel on several task executors based on a device-specific pipeline structure.
5. [Plugin specific properties](@ref openvino_docs_ov_plugin_dg_properties):
   - Provides the plugin-specific properties.
6. [Remote Context](@ref openvino_docs_ov_plugin_dg_remote_context):
   - Provides the device-specific remote context. The context allows creating remote tensors.
7. [Remote Tensor](@ref openvino_docs_ov_plugin_dg_remote_tensor):
   - Provides the device-specific remote tensor API and implementation.

> **NOTE**: This documentation is written based on the `Template` plugin, which demonstrates plugin
> development details. Find the complete code of the `Template`, which is fully compilable and up-to-date,
> **NOTE**: This documentation is written based on the `Template` plugin, which demonstrates plugin development details. Find the complete code of the `Template`, which is fully compilable and up-to-date,
> at `<openvino source dir>/src/plugins/template`.

Detailed guides
-----------------------

* [Build](@ref openvino_docs_ie_plugin_dg_plugin_build) a plugin library using CMake
* Plugin and its components [testing](@ref openvino_docs_ie_plugin_dg_plugin_testing)
* [Quantized networks](@ref openvino_docs_ie_plugin_dg_quantized_networks)
* [Build](@ref openvino_docs_ov_plugin_dg_plugin_build) a plugin library using CMake
* Plugin and its components [testing](@ref openvino_docs_ov_plugin_dg_plugin_testing)
* [Quantized networks](@ref openvino_docs_ov_plugin_dg_quantized_models)
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
* [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide

API References
-----------------------

* [Inference Engine Plugin API](@ref ie_dev_api)
* [Inference Engine Transformation API](@ref ie_transformation_api)
* [OpenVINO Plugin API](@ref ov_dev_api)
* [OpenVINO Transformation API](@ref ie_transformation_api)

@@ -1,48 +1,50 @@
|
||||
# Plugin {#openvino_docs_ov_plugin_dg_plugin}

OpenVINO Plugin usually represents a wrapper around a backend. Backends can be:
- OpenCL-like backend (e.g. clDNN library) for GPU devices.
- oneDNN backend for Intel CPU devices.
- NVIDIA cuDNN for NVIDIA GPUs.

The responsibility of OpenVINO Plugin:
- Initializes a backend and throws an exception in the `Engine` constructor if the backend cannot be initialized.
- Provides information about devices enabled by a particular backend, e.g. how many devices there are, their properties, and so on.
- Loads or imports [compiled model](@ref openvino_docs_ov_plugin_dg_compiled_model) objects.

In addition to the OpenVINO Public API, OpenVINO provides the Plugin API, which is a set of functions and helper classes that simplify new plugin development:

- header files in the `src/inference/dev_api/openvino` directory
- implementations in the `src/inference/src/dev/` directory
- symbols in the OpenVINO shared library

To build an OpenVINO plugin with the Plugin API, see the [OpenVINO Plugin Building](@ref openvino_docs_ov_plugin_dg_plugin_build) guide.

Plugin Class
------------------------

OpenVINO Plugin API provides the helper ov::IPlugin class, recommended as a base class for a plugin.
Based on that, a declaration of a plugin class can look as follows:

@snippet template/src/plugin.hpp plugin:header

### Class Fields

The provided plugin class also has several fields:

* `m_backend` - a backend engine that is used to perform actual computations for model inference. The `Template` plugin uses `ov::runtime::Backend`, which performs computations with OpenVINO™ reference implementations.
* `m_waitExecutor` - a task executor that waits for a response from a device about device task completion.
* `m_cfg` of type `Configuration`:

@snippet template/src/config.hpp configuration:header

As an example, a plugin configuration has the following value parameters (see the sketch after this list):

- `device_id` - particular device ID to work with. Applicable if a plugin supports more than one `Template` device. In this case, some plugin methods, like `set_property`, `query_model`, and `compile_model`, must support the ov::device::id property.
- `perf_counts` - boolean value that identifies whether to collect performance counters during [Inference Request](@ref openvino_docs_ov_plugin_dg_infer_request) execution.
- `streams_executor_config` - configuration of `ov::threading::IStreamsExecutor` to handle settings of the multi-threaded context.
- `performance_mode` - configuration of `ov::hint::PerformanceMode` to set the performance mode.
- `disable_transformations` - allows disabling the transformations which are applied during model compilation.
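
A minimal sketch of what such a `Configuration` structure can look like is given below. This is illustrative only: the field names mirror the list above, but the actual definition lives in `template/src/config.hpp` and may differ.

```cpp
#include <string>

#include "openvino/runtime/properties.hpp"
#include "openvino/runtime/threading/istreams_executor.hpp"

// Illustrative sketch of a plugin configuration; not the actual Template code.
struct Configuration {
    std::string device_id = "0";                      // ov::device::id
    bool perf_counts = false;                         // ov::enable_profiling
    ov::threading::IStreamsExecutor::Config streams_executor_config;
    ov::hint::PerformanceMode performance_mode = ov::hint::PerformanceMode::LATENCY;
    bool disable_transformations = false;             // skip compilation-time passes
};
```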

### Plugin Constructor

A plugin constructor must contain code that checks the ability to work with a device of the `Template`
type. For example, if some drivers are required, the code must check
driver availability. If a driver is not available (for example, the OpenCL runtime is not installed in
case of a GPU device, or an improper driver version is on the host machine), an exception
must be thrown from the plugin constructor.

A plugin must define a device name enabled via the `set_device_name()` method of a base class:

@snippet template/src/plugin.cpp plugin:ctor

### Plugin Destructor

A plugin destructor must stop all plugin activities and clean up all allocated resources.

@snippet template/src/plugin.cpp plugin:dtor

### compile_model()

The plugin should implement two `compile_model()` methods: the first one compiles a model without a remote context; the second one compiles a model with a remote context, if the plugin supports it.

This is the most important function of the `Plugin` class: it creates an instance of `CompiledModel`,
which holds a backend-dependent compiled model in an internal representation:

@snippet template/src/plugin.cpp plugin:compile_model

@snippet template/src/plugin.cpp plugin:compile_model_with_remote

Before creating a `CompiledModel` instance via a constructor, a plugin may check whether the provided
ov::Model object is supported by the device, if needed.

Actual model compilation is done in the `CompiledModel` constructor. Refer to the [CompiledModel Implementation Guide](@ref openvino_docs_ov_plugin_dg_compiled_model) for details.

> **NOTE**: The actual configuration map used in `CompiledModel` is constructed as a base plugin
> configuration set via `Plugin::set_property`, where some values are overwritten with the `config` passed to `Plugin::compile_model`.
> Therefore, the config of `Plugin::compile_model` has a higher priority.
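
For orientation, the following is a hedged sketch of what the two overloads can look like for a plugin derived from ov::IPlugin. The `Configuration{properties, m_cfg}` merging constructor and the `CompiledModel` constructor arguments are assumptions, not the actual Template code:

```cpp
// Sketch only: signatures follow ov::IPlugin, bodies are illustrative.
std::shared_ptr<ov::ICompiledModel> Plugin::compile_model(
    const std::shared_ptr<const ov::Model>& model,
    const ov::AnyMap& properties) const {
    // Compile-time properties override the plugin-level configuration.
    Configuration full_config{properties, m_cfg};  // hypothetical merging ctor
    return std::make_shared<CompiledModel>(model->clone(), shared_from_this(), full_config);
}

std::shared_ptr<ov::ICompiledModel> Plugin::compile_model(
    const std::shared_ptr<const ov::Model>& model,
    const ov::AnyMap& properties,
    const ov::SoPtr<ov::IRemoteContext>& context) const {
    // Same flow, but the remote context is forwarded to the compiled model.
    Configuration full_config{properties, m_cfg};
    return std::make_shared<CompiledModel>(model->clone(), shared_from_this(), context, full_config);
}
```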

### transform_model()

The function accepts a const shared pointer to an `ov::Model` object and applies common and device-specific transformations on a copied model to make it more friendly to hardware operations. For details on how to write custom device-specific transformations, refer to the [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide. See detailed topics about model representation:
* [Intermediate Representation and Operation Sets](@ref openvino_docs_MO_DG_IR_and_opsets)
* [Quantized models](@ref openvino_docs_ov_plugin_dg_quantized_models).

@snippet template/src/plugin.cpp plugin:transform_model

> **NOTE**: After all these transformations, an `ov::Model` object contains operations which can be perfectly mapped to backend kernels. E.g. if the backend has a kernel computing the `A + B` operations at once, the `transform_model` function should contain a pass which fuses operations `A` and `B` into a single custom operation `A + B` that fits the backend kernel set.
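
A conceptual sketch of such a function is shown below, assuming a hypothetical device-specific fusion pass `FuseAddMultiply` (the real pipeline is referenced by the snippet above):

```cpp
#include "openvino/core/model.hpp"
#include "openvino/pass/manager.hpp"
#include "transformations/common_optimizations/common_optimizations.hpp"

// Conceptual sketch of transform_model(); FuseAddMultiply is a hypothetical
// device-specific pass, not part of the OpenVINO distribution.
void transform_model(const std::shared_ptr<ov::Model>& model) {
    ov::pass::Manager manager;
    manager.register_pass<ov::pass::CommonOptimizations>();  // common passes
    // manager.register_pass<FuseAddMultiply>();              // device-specific fusion
    manager.run_passes(model);
}
```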

### query_model()

Use the method with the `HETERO` mode, which allows distributing model execution between different
devices based on the `ov::Node::get_rt_info()` map, which can contain the `"affinity"` key.
The `query_model` method analyzes operations of the provided `model` and returns a list of supported
operations via the ov::SupportedOpsMap structure. The `query_model` method first applies the `transform_model` passes to the input `ov::Model` argument. After this, the transformed model in the ideal case contains only operations that are 1:1 mapped to kernels in the computational backend. In this case, it is very easy to analyze which operations are supported (`m_backend` has a kernel for the operation or an extension for the operation is provided) and which are not (the kernel is missing in `m_backend`); see the simplified sketch after the following steps:

1. Store original names of all operations in the input `ov::Model`.
2. Apply the `transform_model` passes. Note that the names of operations in a transformed model can be different, and we need to restore the mapping in the steps below.
3. Construct a `supported` map which contains names of the original operations. Note that since the inference is performed using the OpenVINO™ reference backend, the decision whether an operation is supported or not depends on whether the latest OpenVINO opset contains such an operation.
4. `ov::SupportedOpsMap` contains only operations which are fully supported by `m_backend`.
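
A simplified sketch of this flow, before looking at the actual snippet below. The name-restoration bookkeeping is omitted, and `is_op_supported()` is a hypothetical helper over the backend kernel set:

```cpp
// Simplified sketch of query_model(); real code is in template/src/plugin.cpp.
ov::SupportedOpsMap Plugin::query_model(const std::shared_ptr<const ov::Model>& model,
                                        const ov::AnyMap& properties) const {
    ov::SupportedOpsMap supported;
    auto transformed = model->clone();
    transform_model(transformed);  // the same passes as in compile_model
    for (const auto& op : transformed->get_ordered_ops()) {
        if (is_op_supported(op))  // hypothetical check against m_backend kernels
            supported.emplace(op->get_friendly_name(), get_device_name());
    }
    return supported;
}
```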

@snippet template/src/plugin.cpp plugin:query_model

### set_property()

Sets new values for plugin property keys:

@snippet template/src/plugin.cpp plugin:set_property

In the snippet above, the `Configuration` class overrides previous configuration values with the new
ones. All these values are used during backend-specific model compilation and execution of inference requests.

> **NOTE**: The function must throw an exception if it receives an unsupported configuration key.
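
From the application side, a plugin property reaches `Plugin::set_property` through `ov::Core`; for example (assuming the plugin registers itself as `"TEMPLATE"`):

```cpp
#include "openvino/runtime/core.hpp"

int main() {
    ov::Core core;
    // Dispatched to Plugin::set_property of the "TEMPLATE" plugin.
    core.set_property("TEMPLATE", ov::enable_profiling(true));
}
```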

### get_property()

Returns the current value for a specified property key:

@snippet template/src/plugin.cpp plugin:get_property

The function is implemented with the `Configuration::Get` method, which wraps the actual configuration
key value into an ov::Any and returns it.

> **NOTE**: The function must throw an exception if it receives an unsupported configuration key.
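
The application-side counterpart reads properties back through `ov::Core::get_property`, continuing the previous example:

```cpp
// The returned value is unwrapped from ov::Any to the property's typed value.
auto mode = core.get_property("TEMPLATE", ov::hint::performance_mode);
bool profiling = core.get_property("TEMPLATE", ov::enable_profiling);
```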

### import_model()

The compiled model importing mechanism allows importing a previously exported backend-specific model and wrapping it
in a [CompiledModel](@ref openvino_docs_ov_plugin_dg_compiled_model) object. This functionality is useful if
backend-specific model compilation takes significant time and/or cannot be done on a target host
device due to other reasons.

During export of a backend-specific model using `CompiledModel::export_model`, a plugin may export any
type of information it needs to import a compiled model properly and check its correctness.
For example, the export information may include:

- Compilation options (state of the `Plugin::m_cfg` structure)
- Information about a plugin and a device type, to check this information later during the import and
throw an exception if the `model` stream contains wrong data. For example, if devices have different
capabilities and a model compiled for a particular device cannot be used for another, such type of
information must be stored and checked during the import.
- The compiled backend-specific model itself

@snippet template/src/plugin.cpp plugin:import_model

@snippet template/src/plugin.cpp plugin:import_model_with_remote
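
From the application side, the export/import round trip looks as follows (the device name `"TEMPLATE"` and `model.xml` are placeholders):

```cpp
#include <sstream>

#include "openvino/runtime/core.hpp"

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");
    auto compiled = core.compile_model(model, "TEMPLATE");

    std::stringstream blob;
    compiled.export_model(blob);  // CompiledModel::export_model on the plugin side

    // Dispatched to Plugin::import_model; skips the expensive compilation step.
    auto imported = core.import_model(blob, "TEMPLATE");
}
```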

### create_context()

The plugin should implement the `Plugin::create_context()` method, which returns `ov::RemoteContext` if the plugin supports a remote context; otherwise, the method can throw an exception saying that it is not implemented.

@snippet template/src/plugin.cpp plugin:create_context

### get_default_context()

`Plugin::get_default_context()` is also needed if the plugin supports a remote context. If the plugin does not support it, this method can throw an exception saying that the functionality is not implemented.

@snippet template/src/plugin.cpp plugin:get_default_context
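
For a plugin without remote context support, a minimal sketch of both methods could be:

```cpp
// Hedged sketch for a plugin without remote context support; signatures follow
// ov::IPlugin and should be verified against the OpenVINO version in use.
ov::SoPtr<ov::IRemoteContext> Plugin::create_context(const ov::AnyMap& /*remote_properties*/) const {
    OPENVINO_NOT_IMPLEMENTED;
}

ov::SoPtr<ov::IRemoteContext> Plugin::get_default_context(const ov::AnyMap& /*remote_properties*/) const {
    OPENVINO_NOT_IMPLEMENTED;
}
```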

Create Instance of Plugin Class
------------------------

An OpenVINO plugin library must export only one function that creates a plugin instance, using the OV_DEFINE_PLUGIN_CREATE_FUNCTION macro:

@snippet template/src/plugin.cpp plugin:create_plugin_engine
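
For reference, the exported entry point typically boils down to the macro plus a version descriptor; the values below are illustrative placeholders, not the actual Template code:

```cpp
// Illustrative only: plugin class name and version fields are placeholders.
static const ov::Version version = {"custom_build_number", "openvino_template_plugin"};
OV_DEFINE_PLUGIN_CREATE_FUNCTION(ov::template_plugin::Plugin, version)
```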

The next step in a plugin library implementation is the [CompiledModel](@ref openvino_docs_ov_plugin_dg_compiled_model) class.
# Plugin Testing {#openvino_docs_ov_plugin_dg_plugin_testing}

OpenVINO tests infrastructure provides a predefined set of functional tests and utilities. They are used to verify a plugin using the OpenVINO public API.
All the tests are written in the [Google Test C++ framework](https://github.com/google/googletest).

OpenVINO Plugin tests are included in the `openvino::funcSharedTests` CMake target, which is built within the OpenVINO repository
(see the [Build Plugin Using CMake](@ref openvino_docs_ov_plugin_dg_plugin_build) guide). This library contains test definitions (the test bodies) which can be parametrized and instantiated in plugins, depending on whether a plugin supports a particular feature, specific sets of parameters for tests on the supported operation set, and so on.

Test definitions are split into test class declarations (see `src/tests/functional/plugin/shared/include`) and test class implementations (see `src/tests/functional/plugin/shared/src`) and include the following scopes of plugin conformance tests:

1. **Behavior tests** (`behavior` sub-folder), which are a separate test group to check that a plugin satisfies basic OpenVINO concepts: plugin creation, multiple compiled models support, multiple synchronous and asynchronous inference requests support, and so on. See the next section for details on how to instantiate the test definition class with plugin-specific parameters.

2. **Single layer tests** (`single_layer_tests` sub-folder). This group of tests checks that a particular single layer can be inferenced on a device. An example of test instantiation based on a test definition from the `openvino::funcSharedTests` library:

   - From the declaration of the convolution test class, we can see that it is a parametrized GoogleTest-based class with the `convLayerTestParamsSet` tuple of parameters:
   @snippet single_layer/convolution.hpp test_convolution:definition

   @snippet single_layer_tests/convolution.cpp test_convolution:instantiate

3. **Sub-graph tests** (`subgraph_tests` sub-folder). This group of tests is designed to test small patterns or combinations of layers. E.g. when a particular topology is being enabled in a plugin, e.g. TF ResNet-50, there is no need to add the whole topology to the tests. Instead, a particular repetitive subgraph or pattern can be extracted from `ResNet-50` and added to the tests. The instantiation of the sub-graph tests is done in the same way as for single layer tests.
> **Note**: such sub-graphs or patterns for sub-graph tests should be added to the `openvino::ngraphFunctions` library first (this library is a pre-defined set of small `ov::Model`s) and then re-used in sub-graph tests.

4. **HETERO tests** (`subgraph_tests` sub-folder) contain tests for the `HETERO` scenario (manual or automatic affinity settings, tests for `query_model`).

5. **Other tests**, which contain tests for other scenarios and have the following types of tests:
   - Tests for execution graph
   - Etc.

To use these tests for your own plugin development, link the `openvino::funcSharedTests` library to your test binary and instantiate the required test cases with desired parameter values.

> **NOTE**: A plugin may contain its own tests for use cases that are specific to hardware or need to be extensively tested.

To build test binaries together with other build artifacts, use the `make all` command. For details, see
[Build Plugin Using CMake*](@ref openvino_docs_ov_plugin_dg_plugin_build).

### How to Extend OpenVINO Plugin Tests

OpenVINO Plugin tests are open for contribution.
Add common test case definitions applicable for all plugins to the `openvino::funcSharedTests` target within the OpenVINO repository. Then, any other plugin supporting the corresponding functionality can instantiate the new test.

> **NOTE**: When implementing a new subgraph test, add new single-layer tests for each operation of the subgraph if such a test does not exist.
docs/IE_PLUGIN_DG/Properties.md (new file)

# Plugin Properties {#openvino_docs_ov_plugin_dg_properties}

A plugin can provide its own device-specific properties.

Property Class
------------------------

OpenVINO API provides the ov::Property interface, which allows defining a property and its access rights. Based on that, a declaration of plugin-specific properties can look as follows:

@snippet include/template/properties.hpp properties:public_header
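
A hedged sketch of such a declaration; the name and namespace are illustrative, and the real declarations live in `include/template/properties.hpp`:

```cpp
#include "openvino/runtime/properties.hpp"

namespace my_plugin {
// Hypothetical device-specific read-write property; the string is the key
// that users pass through ov::Core::set_property / get_property.
static constexpr ov::Property<bool, ov::PropertyMutability::RW> disable_transformations{
    "DISABLE_TRANSFORMATIONS"};
}  // namespace my_plugin
```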

# Quantized models compute and restrictions {#openvino_docs_ov_plugin_dg_quantized_models}

One of the features of OpenVINO is the support of quantized models with different precisions: INT8, INT4, etc.
However, it is up to the plugin to define which exact precisions are supported by the particular HW.
All quantized models which can be expressed in IR have a unified representation by means of the *FakeQuantize* operation.
For more details about low-precision model representation, refer to this [document](@ref openvino_docs_ie_plugin_dg_lp_representation).

### Interpreting FakeQuantize at runtime

Below we define these rules as follows:

- Per-channel quantization of activations for channel-wise and element-wise operations, e.g. Depthwise Convolution, Eltwise Add/Mul, ScaleShift.
- Symmetric and asymmetric quantization of weights and activations with the support of per-channel scales and zero-points.
- Non-unified quantization parameters for Eltwise and Concat operations.
- Non-quantized model output, i.e. there are no quantization parameters for it.

[qdq_propagation]: images/qdq_propagation.png
docs/IE_PLUGIN_DG/RemoteContext.md (new file)

# Remote Context {#openvino_docs_ov_plugin_dg_remote_context}

ov::RemoteContext class functionality:
- Represents a device-specific inference context.
- Allows creating a remote device-specific tensor.

> **NOTE**: If a plugin provides a public API for its own Remote Context, the API should be header-only and must not depend on the plugin library.

RemoteContext Class
------------------------

OpenVINO Plugin API provides the interface ov::IRemoteContext, which should be used as a base class for a plugin-specific remote context. Based on that, a declaration of a remote context class can look as follows:

@snippet src/remote_context.hpp remote_context:header

### Class Fields

The example class has several fields:

- `m_name` - Device name.
- `m_property` - Device-specific context properties. It can be used to cast RemoteContext to a device-specific type.

### RemoteContext Constructor

This constructor should initialize the remote context device name and properties.

@snippet src/remote_context.cpp remote_context:ctor

### get_device_name()

The function returns the device name from the remote context.

@snippet src/remote_context.cpp remote_context:get_device_name

### get_property()

The implementation returns the remote context properties.

@snippet src/remote_context.cpp remote_context:get_property

### create_tensor()

The method creates a device-specific remote tensor.

@snippet src/remote_context.cpp remote_context:create_tensor
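
From the application side, the flow can look as follows (assuming the device registers itself as `"TEMPLATE"`):

```cpp
#include "openvino/runtime/core.hpp"
#include "openvino/runtime/remote_tensor.hpp"

int main() {
    ov::Core core;
    // Dispatched to Plugin::get_default_context of the "TEMPLATE" plugin.
    ov::RemoteContext context = core.get_default_context("TEMPLATE");
    // Dispatched to the plugin's RemoteContext::create_tensor().
    ov::RemoteTensor tensor = context.create_tensor(ov::element::f32, ov::Shape{1, 3, 224, 224});
}
```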

The next step to support device-specific tensors is the creation of a device-specific [Remote Tensor](@ref openvino_docs_ov_plugin_dg_remote_tensor) class.

docs/IE_PLUGIN_DG/RemoteTensor.md (new file)

# Remote Tensor {#openvino_docs_ov_plugin_dg_remote_tensor}

ov::RemoteTensor class functionality:
- Provides an interface to work with device-specific memory.

> **NOTE**: If a plugin provides a public API for its own Remote Tensor, the API should be header-only and must not depend on the plugin library.

Device Specific Remote Tensor Public API
------------------------------------------

The public interface for working with device-specific remote tensors should have a header-only implementation and must not depend on the plugin library.

@snippet include/template/remote_tensor.hpp remote_tensor:public_header

The implementation below has several methods:

### type_check()

A static method used to check whether some abstract remote tensor can be cast to this particular remote tensor type.

### get_data()

A set of methods (specific to this example; other implementations can have a different API) which are helpers for accessing the remote data.
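
Regardless of such plugin-specific helpers, an application can always fall back to the generic ov::RemoteTensor API; for example, continuing from an `ov::RemoteContext` named `context` obtained earlier:

```cpp
// Generic access that works for any remote tensor implementation.
ov::RemoteTensor remote = context.create_tensor(ov::element::f32, ov::Shape{1, 3, 2, 2});
ov::AnyMap params = remote.get_params();        // device-specific property map
std::string device = remote.get_device_name();  // e.g. "TEMPLATE"
```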

Device Specific Internal tensor implementation
-----------------------------------------------

The plugin should have an internal implementation of the remote tensor which can communicate with the public API.
The example contains an implementation of a remote tensor which wraps memory from an STL vector.

OpenVINO Plugin API provides the interface ov::IRemoteTensor, which should be used as a base class for remote tensors.

The example implementation has two remote tensor classes:

- An internal type-dependent implementation, which takes the vector type as a template argument and creates the type-specific tensor.
- A type-independent implementation, which works with the type-dependent tensor inside.

Based on that, an implementation of a type-independent remote tensor class can look as follows:

@snippet src/remote_context.cpp vector_impl:implementation

The implementation provides a helper to get the wrapped STL container, overrides all important methods of the ov::IRemoteTensor class, and delegates to the type-dependent implementation.

The type-dependent remote tensor has the following implementation:

@snippet src/remote_context.cpp vector_impl_t:implementation

### Class Fields

The class has several fields:

- `m_element_type` - Tensor element type.
- `m_shape` - Tensor shape.
- `m_strides` - Tensor strides.
- `m_data` - Wrapped vector.
- `m_dev_name` - Device name.
- `m_properties` - Remote tensor specific properties, which can be used to detect the type of the remote tensor.

### VectorTensorImpl()

The constructor of the remote tensor implementation. It creates a vector with data, initializes the device name and properties, and updates the shape, element type, and strides.

### get_element_type()

The method returns the tensor element type.

### get_shape()

The method returns the tensor shape.

### get_strides()

The method returns the tensor strides.

### set_shape()

The method allows setting new shapes for the remote tensor.

### get_properties()

The method returns tensor-specific properties.

### get_device_name()

The method returns the tensor-specific device name.
:maxdepth: 1
:hidden:

openvino_docs_ov_plugin_dg_quantized_models
openvino_docs_OV_UG_lpt

@endsphinxdirective

The guides below provide extra information about specific features of OpenVINO that are useful for understanding during OpenVINO plugin development:

* [Quantized models](@ref openvino_docs_ov_plugin_dg_quantized_models)
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
* [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide
:maxdepth: 1
:hidden:

../groupov_dev_api
../groupie_transformation_api

@endsphinxdirective

The guides below provide extra API references needed for OpenVINO plugin development:

* [OpenVINO Plugin API](@ref ov_dev_api)
* [OpenVINO Transformation API](@ref ie_transformation_api)
<navindex>
<!-- Steps -->
<tab type="usergroup" url="index.html" visible="yes" title="GUIDE">
<tab type="usergroup" url="index.html" title="Developer Guide for OpenVINO Plugin Library">
<tab type="user" url="@ref plugin" visible="yes" title="Implement Plugin Functionality"/>
<tab type="user" url="@ref compiled_model" visible="yes" title="Implement Compiled Model Functionality">
<tab type="usergroup" title="Low Precision Transformations" url="@ref openvino_docs_OV_UG_lpt">
<tab type="user" title="Attributes" url="@ref openvino_docs_OV_UG_lpt_attributes">
<tab type="user" title="AvgPoolPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_AvgPoolPrecisionPreserved"/>
<!-- ... -->
<tab type="user" title="CreateAttribute" url="@ref openvino_docs_OV_UG_lpt_CreateAttribute"/>
<tab type="user" title="CreatePrecisionsDependentAttribute" url="@ref openvino_docs_OV_UG_lpt_CreatePrecisionsDependentAttribute"/>
<tab type="user" title="MarkupAvgPoolPrecisionPreserved" url="@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved"/>
<tab type="user" title="MarkupBias" url="@ref openvino_docs_OV_UG_lpt_MarkupBias"/>
<tab type="user" title="MarkupCanBeQuantized" url="@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized"/>
<tab type="user" title="MarkupPerTensorQuantization" url="@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization"/>
<tab type="user" title="MarkupPrecisions" url="@ref openvino_docs_OV_UG_lpt_MarkupPrecisions"/>
<!-- ... -->
</tab>
<tab type="user" url="@ref infer_request" visible="yes" title="Implement Synchronous Inference Request"/>
<tab type="user" url="@ref async_infer_request" visible="yes" title="Implement Asynchronous Inference Request"/>
<tab type="user" url="@ref properties" visible="yes" title="Provide Plugin Specific Properties"/>
<tab type="user" url="@ref remote_context" visible="yes" title="Implement Remote Context"/>
<tab type="user" url="@ref remote_tensor" visible="yes" title="Implement Remote Tensor"/>
</tab>
</tab>
<!-- Additional resources -->
<!-- ... -->
</tab>
<!-- API References -->
<tab type="usergroup" title="API REFERENCE">
<!-- OpenVINO Plugin API -->
<tab type="user" url="group__ov__dev__api.html" visible="yes" title="OpenVINO Plugin API Reference"/>
<!-- OpenVINO Transformations API -->
<tab type="user" url="group__ie__transformation__api.html" visible="yes" title="OpenVINO Transformations API Reference"/>
</tab>
<tab type="usergroup" title="MAIN OPENVINO™ DOCS" url="../index.html"/>
</navindex>
### Step 2. Markup
This step creates runtime attributes for operations. These attributes will be used in the next step. Transformations:
* [MarkupBias](@ref openvino_docs_OV_UG_lpt_MarkupBias)
* [MarkupCanBeQuantized](@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized)
* [MarkupPrecisions](@ref openvino_docs_OV_UG_lpt_MarkupPrecisions)
* [MarkupPerTensorQuantization](@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization)

This step defines the optimal `FakeQuantize` decomposition precisions for the best inference performance via operation markup with runtime attribute instances. Attributes are created for input and output ports and for operations. Transformations do not change the operation output port precisions. The model markup low-precision logic is decomposed and implemented in the following common markup transformations. The order of transformations is important (a conceptual sketch follows the list):

1. [MarkupBias](@ref openvino_docs_OV_UG_lpt_MarkupBias)
2. [MarkupCanBeQuantized](@ref openvino_docs_OV_UG_lpt_MarkupCanBeQuantized)
3. [MarkupPrecisions](@ref openvino_docs_OV_UG_lpt_MarkupPrecisions)
4. [MarkupPerTensorQuantization](@ref openvino_docs_OV_UG_lpt_MarkupPerTensorQuantization)
5. [MarkupAvgPoolPrecisionPreserved](@ref openvino_docs_OV_UG_lpt_MarkupAvgPoolPrecisionPreserved)
6. [PropagatePrecisions](@ref openvino_docs_OV_UG_lpt_PropagatePrecisions)
7. [AlignQuantizationIntervals](@ref openvino_docs_OV_UG_lpt_AlignQuantizationIntervals)
8. [AlignQuantizationParameters](@ref openvino_docs_OV_UG_lpt_AlignQuantizationParameters)
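
A conceptual sketch of this ordering, assuming the passes can be registered with default constructor arguments (the actual pipeline is assembled inside the LPT library, and the passes usually take parameters):

```cpp
#include "openvino/pass/manager.hpp"
// LPT pass headers live in the low-precision transformations library.

// Conceptual ordering only; constructor arguments are omitted.
void run_markup_pipeline(const std::shared_ptr<ov::Model>& model) {
    ov::pass::Manager manager;
    manager.register_pass<ngraph::pass::low_precision::MarkupBias>();
    manager.register_pass<ngraph::pass::low_precision::MarkupCanBeQuantized>();
    manager.register_pass<ngraph::pass::low_precision::MarkupPrecisions>();
    manager.register_pass<ngraph::pass::low_precision::MarkupPerTensorQuantization>();
    manager.register_pass<ngraph::pass::low_precision::MarkupAvgPoolPrecisionPreserved>();
    manager.register_pass<ngraph::pass::low_precision::PropagatePrecisions>();
    manager.register_pass<ngraph::pass::low_precision::AlignQuantizationIntervals>();
    manager.register_pass<ngraph::pass::low_precision::AlignQuantizationParameters>();
    manager.run_passes(model);
}
```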

The table of transformations and used attributes:

| Transformation name         | Create attributes              | Use attributes |
|-----------------------------|--------------------------------|----------------|
| MarkupBias                  | Bias                           |                |
| MarkupCanBeQuantized        | Precisions                     |                |
| MarkupPrecisions            | Precisions, PrecisionPreserved |                |
| MarkupPerTensorQuantization | PerTensorQuantization          |                |

# MarkupBias transformation {#openvino_docs_OV_UG_lpt_MarkupBias}

The ngraph::pass::low_precision::MarkupBias class represents the `MarkupBias` transformation.
# Legal Information {#openvino_docs_Legal_Information}

@sphinxdirective

Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex).

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

OpenVINO™ Logo
###########################################################
To build equity around the project, the OpenVINO logo was created for both Intel and community usage. The logo may only be used to represent the OpenVINO toolkit and offerings built using the OpenVINO toolkit.

Logo Usage Guidelines
###########################################################
The OpenVINO logo must be used in connection with truthful, non-misleading references to the OpenVINO toolkit, and for no other purpose.
Modification of the logo or use of any separate element(s) of the logo alone is not allowed.

@endsphinxdirective
openvino_docs_MO_DG_FP16_Compression
openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ

@endsphinxdirective

Model Optimizer is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.

To use it, you need a pre-trained deep learning model in one of the supported formats: TensorFlow, PyTorch, PaddlePaddle, MXNet, Caffe, Kaldi, or ONNX. Model Optimizer converts the model to the OpenVINO Intermediate Representation format (IR), which you can infer later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.

Note that Model Optimizer does not infer models.

The figure below illustrates the typical workflow for deploying a trained deep learning model:

.. image:: _static/images/BASIC_FLOW_MO_simplified.svg

where IR is a pair of files describing the model:

* ``.xml`` - Describes the network topology.

* ``.bin`` - Contains the weights and biases binary data.

The OpenVINO IR can be additionally optimized for inference by :doc:`Post-training optimization <pot_introduction>`, which applies post-training quantization methods.

How to Run Model Optimizer
##########################

To convert a model to IR, you can run Model Optimizer by using the following command:

.. code-block:: sh

   mo --input_model INPUT_MODEL

If the out-of-the-box conversion (only the ``--input_model`` parameter is specified) is not successful, use the parameters mentioned below to override input shapes and cut the model:

- Model Optimizer provides two parameters to override original input shapes for model conversion: ``--input`` and ``--input_shape``.
  For more information about these parameters, refer to the :doc:`Setting Input Shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.

- To cut off unwanted parts of a model (such as unsupported operations and training sub-graphs),
  use the ``--input`` and ``--output`` parameters to define new inputs and outputs of the converted model.
  For a more detailed description, refer to the :doc:`Cutting Off Parts of a Model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>` guide.

You can also insert additional input pre-processing sub-graphs into the converted model by using
the ``--mean_values``, ``--scale_values``, ``--layout``, and other parameters described
in the :doc:`Embedding Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` article.

The ``--compress_to_fp16`` compression parameter in Model Optimizer allows generating IR with constants (for example, weights for convolutions and matrix multiplications) compressed to the ``FP16`` data type. For more details, refer to the :doc:`Compression of a Model to FP16 <openvino_docs_MO_DG_FP16_Compression>` guide.

To get the full list of conversion parameters available in Model Optimizer, run the following command:

.. code-block:: sh

   mo --help

Examples of CLI Commands
########################

Below is a list of separate examples for different frameworks and Model Optimizer parameters:

1. Launch Model Optimizer for a TensorFlow MobileNet model in the binary protobuf format:

   .. code-block:: sh

      mo --input_model MobileNet.pb

   Launch Model Optimizer for a TensorFlow BERT model in the SavedModel format with three inputs. Specify input shapes explicitly where the batch size and the sequence length equal 2 and 30 respectively:

   .. code-block:: sh

      mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2,30],[2,30]

   For more information, refer to the :doc:`Converting a TensorFlow Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>` guide.

2. Launch Model Optimizer for an ONNX OCR model and specify a new output explicitly:

   .. code-block:: sh

      mo --input_model ocr.onnx --output probabilities

   For more information, refer to the :doc:`Converting an ONNX Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>` guide.

   .. note::

      PyTorch models must be exported to the ONNX format before conversion into IR. More information can be found in :doc:`Converting a PyTorch Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch>`.

3. Launch Model Optimizer for a PaddlePaddle UNet model and apply mean-scale normalization to the input:

   .. code-block:: sh

      mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255

   For more information, refer to the :doc:`Converting a PaddlePaddle Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>` guide.

4. Launch Model Optimizer for an Apache MXNet SSD Inception V3 model and specify first-channel layout for the input:

   .. code-block:: sh

      mo --input_model ssd_inception_v3-0000.params --layout NCHW

   For more information, refer to the :doc:`Converting an Apache MXNet Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>` guide.

5. Launch Model Optimizer for a Caffe AlexNet model with input channels in the RGB format which needs to be reversed:

   .. code-block:: sh

      mo --input_model alexnet.caffemodel --reverse_input_channels

   For more information, refer to the :doc:`Converting a Caffe Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>` guide.

6. Launch Model Optimizer for a Kaldi LibriSpeech nnet2 model:

   .. code-block:: sh

      mo --input_model librispeech_nnet2.mdl --input_shape [1,140]

   For more information, refer to the :doc:`Converting a Kaldi Model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>` guide.

- To get conversion recipes for specific TensorFlow, ONNX, PyTorch, Apache MXNet, and Kaldi models, refer to the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>`.
- For more information about IR, see :doc:`Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™ <openvino_docs_MO_DG_IR_and_opsets>`.

@endsphinxdirective
@@ -1,102 +1,156 @@
|
||||
# Embedding Preprocessing Computation {#openvino_docs_MO_DG_Additional_Optimization_Use_Cases}
|
||||
|
||||
Input data for inference can be different from the training dataset and requires additional preprocessing before inference.
|
||||
@sphinxdirective

Input data for inference can be different from the training dataset and may require
additional preprocessing before inference. To accelerate the whole pipeline, including
preprocessing and inference, Model Optimizer provides special parameters such as ``--mean_values``,
``--scale_values``, ``--reverse_input_channels``, and ``--layout``. Based on these
parameters, Model Optimizer generates OpenVINO IR with additionally inserted sub-graphs
that perform the defined preprocessing. This preprocessing block can perform mean-scale
normalization of input data, reversal of data along the channel dimension, and changes to
the data layout. See the following sections for details on the parameters, or the
:doc:`Overview of Preprocessing API <openvino_docs_OV_UG_Preprocessing_Overview>`
for the same functionality in OpenVINO Runtime.

Specifying Layout
#################

Layout defines the meaning of dimensions in a shape and can be specified for both
inputs and outputs. Some preprocessing requires input layouts to be set, for example,
setting a batch, applying mean or scale values, and reversing input channels (BGR<->RGB).

For the layout syntax, check the :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`.
To specify the layout, use the ``--layout`` option followed by the layout value.

For example, the following command specifies the ``NHWC`` layout for a TensorFlow
``nasnet_large`` model that was exported to the ONNX format:

.. code-block:: sh

   mo --input_model tf_nasnet_large.onnx --layout nhwc
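The same layout information can instead be applied at runtime through the Preprocessing API linked above. A minimal Python sketch (the IR file name is illustrative, and exact class and method names may differ across releases):

.. code-block:: py

   from openvino.runtime import Core, Layout
   from openvino.preprocess import PrePostProcessor

   core = Core()
   model = core.read_model("tf_nasnet_large.xml")

   ppp = PrePostProcessor(model)
   # Declare that the data fed to the model comes in the NHWC layout.
   ppp.input().tensor().set_layout(Layout("NHWC"))
   model = ppp.build()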
Additionally, if a model has more than one input or needs both input and output
layouts specified, you need to provide the name of each input or output to apply the layout.

For example, the following command specifies the layout for an ONNX ``Yolo v3 Tiny``
model with its first input ``input_1`` in the ``NCHW`` layout and second input ``image_shape``
having two dimensions: batch and the size of the image, expressed as the ``N?`` layout:

.. code-block:: sh

   mo --input_model yolov3-tiny.onnx --layout input_1(nchw),image_shape(n?)

Changing Model Layout
#####################

Changing the model layout may be necessary if it differs from the one presented by the input data.
Use either ``--layout`` alone or ``--source_layout`` together with ``--target_layout`` to change the layout.

For example, for the same ``nasnet_large`` model mentioned previously, you can use
the following commands to provide data in the ``NCHW`` layout:

.. code-block:: sh

   mo --input_model tf_nasnet_large.onnx --source_layout nhwc --target_layout nchw
   mo --input_model tf_nasnet_large.onnx --layout "nhwc->nchw"

Again, if a model has more than one input or needs both input and output layouts
specified, you need to provide the name of each input or output to apply the layout.

For example, to provide data in the ``NHWC`` layout for the ``Yolo v3 Tiny`` model
mentioned earlier, use the following commands:

.. code-block:: sh

   mo --input_model yolov3-tiny.onnx --source_layout "input_1(nchw),image_shape(n?)" --target_layout "input_1(nhwc)"
   mo --input_model yolov3-tiny.onnx --layout "input_1(nchw->nhwc),image_shape(n?)"

Specifying Mean and Scale Values
################################

Neural network models are usually trained with normalized input data. This
means that the input data values are converted to a specific range, for example,
``[0, 1]`` or ``[-1, 1]``. Sometimes, mean values (mean images) are subtracted
from the input data values as part of the preprocessing.

There are two cases of how the input data preprocessing is implemented.

* The input preprocessing operations are a part of the model.

  In this case, the application does not perform a separate preprocessing step:
  everything is embedded into the model itself. Model Optimizer will generate the
  OpenVINO IR with the required preprocessing operations, and no ``mean`` and
  ``scale`` parameters are required.

* The input preprocessing operations are not a part of the model, and the preprocessing
  is performed within the application which feeds the model with input data.

  In this case, information about mean/scale values should be provided to Model
  Optimizer to embed it into the generated OpenVINO IR.

Model Optimizer provides command-line parameters to specify the values: ``--mean_values``,
``--scale_values``, and ``--scale``. Using these parameters, Model Optimizer embeds the
corresponding preprocessing block for mean-value normalization of the input data
and optimizes this block so that the preprocessing takes negligible time for inference.

For example, the following command runs Model Optimizer for the PaddlePaddle UNet
model and applies mean-scale normalization to the input data:

.. code-block:: sh

   mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255
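In this example, the embedded preprocessing block computes, for each input channel :math:`c`:

.. math::

   x'_{c} = \frac{x_{c} - m_{c}}{s}

where :math:`m = [123, 117, 104]` are the mean values and :math:`s = 255` is the scale (the mean is subtracted first, then the values are divided by the scale).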
Reversing Input Channels
########################

Sometimes, input images for your application can be of the RGB (or BGR) format,
while the model is trained on images of the BGR (or RGB) format, that is, with the
opposite order of color channels. In this case, it is important to preprocess the
input images by reversing the color channels before inference.

To embed this preprocessing step into OpenVINO IR, Model Optimizer provides the
``--reverse_input_channels`` command-line parameter to shuffle the color channels.

The ``--reverse_input_channels`` parameter can be used to preprocess the model
input in the following cases:

* Only one dimension in the input shape has a size equal to ``3``.
* One dimension has an undefined size and is marked as the ``C`` channel using the ``layout`` parameters.

Using the ``--reverse_input_channels`` parameter, Model Optimizer embeds the corresponding
preprocessing block for reversing the input data along the channel dimension and optimizes
this block so that the preprocessing takes only negligible time for inference.

For example, the following command launches Model Optimizer for the TensorFlow AlexNet
model and embeds the ``reverse_input_channel`` preprocessing block into OpenVINO IR:

.. code-block:: sh

   mo --input_model alexnet.pb --reverse_input_channels

.. note::

   If both mean and scale values are specified, the mean is subtracted first and
   then the scale is applied, regardless of the order of options on the command line.
   Input values are *divided* by the scale value(s). If the ``--reverse_input_channels``
   option is also used, ``reverse_input_channels`` is applied first, then ``mean``,
   and after that ``scale``. The data flow in the model looks as follows:
   ``Parameter -> ReverseInputChannels -> Mean apply -> Scale apply -> the original body of the model``.
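For reference, the same reversal, mean, and scale steps can be set up at runtime with the Preprocessing API. A minimal sketch, assuming an IR with a single ``NCHW`` input (file name and values are illustrative; check the Preprocessing API overview for the exact method names in your release):

.. code-block:: py

   from openvino.runtime import Core, Layout
   from openvino.preprocess import PrePostProcessor

   core = Core()
   model = core.read_model("alexnet.xml")

   ppp = PrePostProcessor(model)
   ppp.input().tensor().set_layout(Layout("NCHW"))
   # Same order as the embedded Model Optimizer block:
   # reverse channels, subtract mean, divide by scale.
   ppp.input().preprocess() \
       .reverse_channels() \
       .mean([123.0, 117.0, 104.0]) \
       .scale(255.0)
   model = ppp.build()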
Additional Resources
####################

* :doc:`Overview of Preprocessing API <openvino_docs_OV_UG_Preprocessing_Overview>`

@endsphinxdirective

@@ -1,20 +1,29 @@
# Compressing a Model to FP16 {#openvino_docs_MO_DG_FP16_Compression}

@sphinxdirective

Model Optimizer by default converts all floating-point weights to the ``FP16`` data type.
The resulting IR is called a compressed ``FP16`` model. The resulting model occupies
about half the space of the original in the file system, but it may have some accuracy drop.
For most models, the accuracy drop is negligible, but if it is significant, you can
disable the compression explicitly.

.. note::

   Starting from the 2022.3 release, the ``--data_type`` option is deprecated.
   Instead of ``--data_type FP16``, use ``--compress_to_fp16``. Using ``--data_type FP32``
   will give no result and will not force ``FP32`` precision in the model. If the model
   has ``FP16`` constants, such constants will have ``FP16`` precision in IR as well.

To compress the model, use the ``--compress_to_fp16`` option:

.. code-block:: sh

   mo --input_model INPUT_MODEL --compress_to_fp16

By default, models are compressed to ``FP16``, but you can disable compression by
specifying ``--compress_to_fp16=False``:

.. code-block:: sh

   mo --input_model INPUT_MODEL --compress_to_fp16=False
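If you use the Model Optimizer Python API instead of the ``mo`` command, a minimal sketch might look as follows (the ``convert_model`` entry point and its ``compress_to_fp16`` argument are assumed to match your installed release; the file names are illustrative):

.. code-block:: py

   from openvino.runtime import serialize
   from openvino.tools.mo import convert_model

   # Convert with FP16 compression enabled (pass False to keep FP32 weights).
   ov_model = convert_model("model.onnx", compress_to_fp16=True)
   serialize(ov_model, "model.xml")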
For details on how plugins handle compressed ``FP16`` models, see
:doc:`Working with devices <openvino_docs_OV_UG_Working_with_devices>`.

.. note::

   ``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization.
   Refer to the :doc:`Post-training optimization <pot_introduction>` guide for more
   information about that.

@endsphinxdirective

@@ -1,109 +1,168 @@
# Getting Performance Numbers {#openvino_docs_MO_DG_Getting_Performance_Numbers}

@sphinxdirective

This guide explains how to use the benchmark_app to get performance numbers. It also explains how the performance
numbers are reflected through internal inference performance counters and execution graphs, and includes
information on using ITT and Intel® VTune™ Profiler to get performance insights.

Test performance with the benchmark_app
###########################################################

Prerequisites
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

To run benchmarks, you need both OpenVINO developer tools and Runtime installed. Follow the
:doc:`Installation guide <openvino_docs_install_guides_install_dev_tools>` and make sure to install the latest
general release package with support for the frameworks of the models you want to test.

To test performance of your model, make sure you :doc:`prepare the model for use with OpenVINO <openvino_docs_model_processing_introduction>`.
For example, if you use :doc:`OpenVINO's automation tools <omz_tools_downloader>`, these two commands will download
the resnet-50-tf model and convert it to OpenVINO IR.

.. code-block:: bash

   omz_downloader --name resnet-50-tf
   omz_converter --name resnet-50-tf

Running the benchmark application
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

For a detailed description, see the dedicated articles:
:doc:`benchmark_app for C++ <openvino_inference_engine_samples_benchmark_app_README>` and
:doc:`benchmark_app for Python <openvino_inference_engine_tools_benchmark_tool_README>`.

The benchmark_app includes a lot of device-specific options, but the primary usage is as simple as:

.. code-block:: bash

   benchmark_app -m <model> -d <device> -i <input>

Each of the :doc:`OpenVINO supported devices <openvino_docs_OV_UG_supported_plugins_Supported_Devices>` offers
performance settings that have command-line equivalents in the Benchmark app.

While these settings provide really low-level control for the optimal model performance on the *specific* device,
it is recommended to always start performance evaluation with the :doc:`OpenVINO High-Level Performance Hints <openvino_docs_OV_UG_Performance_Hints>` first, like so:

.. code-block:: bash

   # for throughput prioritization
   benchmark_app -hint tput -m <model> -d <device>
   # for latency prioritization
   benchmark_app -hint latency -m <model> -d <device>
Additional benchmarking considerations
###########################################################

1 - Select a Proper Set of Operations to Measure
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

When evaluating performance of a model with OpenVINO Runtime, it is required to measure a proper set of operations.

- Avoid including one-time costs such as model loading.
- Track operations that occur outside OpenVINO Runtime (such as video decoding) separately.

.. note::

   Some image pre-processing can be baked into OpenVINO IR and accelerated accordingly. For more information,
   refer to :doc:`Embedding Pre-processing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` and
   :doc:`General Runtime Optimizations <openvino_docs_deployment_optimization_guide_common>`.

2 - Try to Get Credible Data
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Performance conclusions should be built upon reproducible data. Performance measurements should
be made with a large number of invocations of the same routine. Since the first iteration is almost always significantly
slower than the subsequent ones, an aggregated value can be used for the execution time for final projections:

- If the warm-up run does not help or execution time still varies, you can try running a large number of iterations
  and then average the results.
- If the time values vary too much, consider using the geometric mean (see the formula after this list).
- Be aware of throttling and other power oddities. A device can exist in one of several different power states.
  When optimizing your model, consider fixing the device frequency for better performance data reproducibility.
  However, the end-to-end (application) benchmarking should also be performed under real operational conditions.
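For reference, the geometric mean of :math:`n` measured execution times :math:`t_{1}, \dots, t_{n}` is:

.. math::

   t_{geomean} = \left( \prod_{i=1}^{n} t_{i} \right)^{1/n}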
3 - Compare Performance with Native/Framework Code
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

When comparing the OpenVINO Runtime performance with the framework or another reference code, make sure that both versions are as similar as possible:

- Wrap the exact inference execution (for examples, see :doc:`Benchmark app <openvino_inference_engine_samples_benchmark_app_README>`).
- Do not include model loading time.
- Ensure that the inputs are identical for OpenVINO Runtime and the framework. For example, watch out for random values that can be used to populate the inputs.
- In situations when any user-side pre-processing should be tracked separately, consider :doc:`image pre-processing and conversion <openvino_docs_OV_UG_Preprocessing_Overview>`.
- When applicable, leverage the :doc:`Dynamic Shapes support <openvino_docs_OV_UG_DynamicShapes>`.
- If possible, demand the same accuracy. For example, TensorFlow allows ``FP16`` execution, so when comparing to that, make sure to test the OpenVINO Runtime with ``FP16`` as well.
Internal Inference Performance Counters and Execution Graphs
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

More detailed insights into inference performance breakdown can be achieved with device-specific performance counters and/or execution graphs.
Both :doc:`C++ <openvino_inference_engine_samples_benchmark_app_README>` and :doc:`Python <openvino_inference_engine_tools_benchmark_tool_README>`
versions of the *benchmark_app* support a ``-pc`` command-line parameter that outputs an internal execution breakdown.
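For example, a run that prints the per-layer breakdown for the model prepared earlier might look like this (the IR file name is illustrative):

.. code-block:: bash

   benchmark_app -m resnet-50-tf.xml -d CPU -pc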
For example, the table shown below is part of the performance counters for quantized
`TensorFlow implementation of ResNet-50 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf>`__
model inference on :doc:`CPU Plugin <openvino_docs_OV_UG_supported_plugins_CPU>`.
Keep in mind that since the device is CPU, the ``realTime`` wall clock and the ``cpu`` time layers are the same.
Information about layer precision is also stored in the performance counters.

=========================================================== ============= ============== ===================== ================= ==============
layerName                                                   execStatus    layerType      execType              realTime (ms)     cpuTime (ms)
=========================================================== ============= ============== ===================== ================= ==============
resnet_model/batch_normalization_15/FusedBatchNorm/Add      EXECUTED      Convolution    jit_avx512_1x1_I8     0.377             0.377
resnet_model/conv2d_16/Conv2D/fq_input_0                    NOT_RUN       FakeQuantize   undef                 0                 0
resnet_model/batch_normalization_16/FusedBatchNorm/Add      EXECUTED      Convolution    jit_avx512_I8         0.499             0.499
resnet_model/conv2d_17/Conv2D/fq_input_0                    NOT_RUN       FakeQuantize   undef                 0                 0
resnet_model/batch_normalization_17/FusedBatchNorm/Add      EXECUTED      Convolution    jit_avx512_1x1_I8     0.399             0.399
resnet_model/add_4/fq_input_0                               NOT_RUN       FakeQuantize   undef                 0                 0
resnet_model/add_4                                          NOT_RUN       Eltwise        undef                 0                 0
resnet_model/add_5/fq_input_1                               NOT_RUN       FakeQuantize   undef                 0                 0
=========================================================== ============= ============== ===================== ================= ==============

The table contains layer names (as seen in OpenVINO IR), the type of each layer, and execution statistics.

The ``execStatus`` column of the table includes the following possible values:

- ``EXECUTED`` - the layer was executed by a standalone primitive.
- ``NOT_RUN`` - the layer was not executed by a standalone primitive or was fused with another operation and executed in another layer primitive.

The ``execType`` column of the table includes inference primitives with specific suffixes. The layers can have the following marks:

- The ``I8`` suffix is for layers that had 8-bit data type input and were computed in 8-bit precision.
- The ``FP32`` suffix is for layers computed in 32-bit precision.

All ``Convolution`` layers are executed in ``int8`` precision. The rest of the layers are fused into Convolutions using post-operation optimization,
as described in :doc:`CPU Device <openvino_docs_OV_UG_supported_plugins_CPU>`.
Both *benchmark_app* versions also support the ``exec_graph_path`` command-line option. It requires OpenVINO to output the same execution
statistics per layer, but in the form of a plugin-specific `Netron-viewable <https://netron.app/>`__ graph to the specified file.
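A sketch of such a run, assuming the option takes the output file path (file names are illustrative):

.. code-block:: bash

   benchmark_app -m resnet-50-tf.xml -d CPU -exec_graph_path exec_graph.xml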
Especially when performance-debugging the :doc:`latency <openvino_docs_deployment_optimization_guide_latency>`, note that the counters
do not reflect the time spent in the ``plugin/device/driver/etc`` queues. If the sum of the counters is too different from the latency
of an inference request, consider testing with fewer inference requests. For example, running a single
:doc:`OpenVINO stream <openvino_docs_deployment_optimization_guide_tput>` with multiple requests would produce nearly identical
counters as running a single inference request, while the actual latency can be quite different.

Lastly, the performance statistics with both performance counters and execution graphs are averaged,
so such data for the :doc:`inputs of dynamic shapes <openvino_docs_OV_UG_DynamicShapes>` should be measured carefully,
preferably by isolating the specific shape and executing multiple times in a loop, to gather reliable data.

Use ITT to Get Performance Insights
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

In general, OpenVINO and its individual plugins are heavily instrumented with Intel® Instrumentation and Tracing Technology (ITT).
Therefore, you can also compile OpenVINO from the source code with ITT enabled and use tools like
`Intel® VTune™ Profiler <https://software.intel.com/en-us/vtune>`__ to get a detailed inference performance breakdown and additional
insights in the application-level performance on the timeline view.
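A configure-and-build sketch for an ITT-enabled build (the ``ENABLE_PROFILING_ITT`` CMake option name is an assumption; check the build documentation of your source version):

.. code-block:: bash

   cmake -DENABLE_PROFILING_ITT=ON <path-to-openvino-source>
   cmake --build . --parallel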

@endsphinxdirective

File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,85 +1,99 @@
# Converting a Caffe Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe}

@sphinxdirective

To convert a Caffe model, run Model Optimizer with the path to the input model ``.caffemodel`` file:

.. code-block:: sh

   mo --input_model <INPUT_MODEL>.caffemodel

The following list provides the Caffe-specific parameters.

.. code-block:: sh

   Caffe-specific parameters:
     --input_proto INPUT_PROTO, -d INPUT_PROTO
                           Deploy-ready prototxt file that contains a topology
                           structure and layer attributes
     --caffe_parser_path CAFFE_PARSER_PATH
                           Path to python Caffe parser generated from caffe.proto
     -k K                  Path to CustomLayersMapping.xml to register custom
                           layers
     --disable_omitting_optional
                           Disable omitting optional attributes to be used for
                           custom layers. Use this option if you want to transfer
                           all attributes of a custom layer to IR. Default
                           behavior is to transfer the attributes with default
                           values and the attributes defined by the user to IR.
     --enable_flattening_nested_params
                           Enable flattening optional params to be used for
                           custom layers. Use this option if you want to transfer
                           attributes of a custom layer to IR with flattened
                           nested parameters. Default behavior is to transfer the
                           attributes without flattening nested parameters.
CLI Examples Using Caffe-Specific Parameters
++++++++++++++++++++++++++++++++++++++++++++

* Launching Model Optimizer for `bvlc_alexnet.caffemodel <https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet>`__ with a specified ``prototxt`` file. This is needed when the name of the Caffe model and the ``.prototxt`` file are different or are placed in different directories. Otherwise, it is enough to provide only the path to the input ``model.caffemodel`` file.

  .. code-block:: sh

     mo --input_model bvlc_alexnet.caffemodel --input_proto bvlc_alexnet.prototxt

* Launching Model Optimizer for `bvlc_alexnet.caffemodel <https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet>`__ with a specified ``CustomLayersMapping`` file. This is the legacy method of quickly enabling model conversion if your model has custom layers. This requires the Caffe system on the computer. An example of ``CustomLayersMapping.xml`` can be found in ``<OPENVINO_INSTALLATION_DIR>/mo/front/caffe/CustomLayersMapping.xml.example``. The optional parameters without default values and not specified by the user in the ``.prototxt`` file are removed from the Intermediate Representation, and nested parameters are flattened:

  .. code-block:: sh

     mo --input_model bvlc_alexnet.caffemodel -k CustomLayersMapping.xml --disable_omitting_optional --enable_flattening_nested_params

  This example shows a multi-input model with the input layers ``data`` and ``rois``:

  .. code-block:: cpp

     layer {
       name: "data"
       type: "Input"
       top: "data"
       input_param {
         shape { dim: 1 dim: 3 dim: 224 dim: 224 }
       }
     }
     layer {
       name: "rois"
       type: "Input"
       top: "rois"
       input_param {
         shape { dim: 1 dim: 5 dim: 1 dim: 1 }
       }
     }

* Launching the Model Optimizer for a multi-input model with two inputs and providing a new shape for each input in the order they are passed to the Model Optimizer. In particular, for ``data``, set the shape to ``1,3,227,227``. For ``rois``, set the shape to ``1,6,1,1``:

  .. code-block:: sh

     mo --input_model /path-to/your-model.caffemodel --input data,rois --input_shape (1,3,227,227),[1,6,1,1]
Custom Layer Definition
#######################

Internally, when you run Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains any such layers, Model Optimizer classifies them as custom.

Supported Caffe Layers
#######################

For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.

Frequently Asked Questions (FAQ)
################################

Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.

Summary
#######

In this document, you learned:
@@ -87,5 +101,10 @@ In this document, you learned:
* Which Caffe models are supported.
* How to convert a trained Caffe model by using Model Optimizer with both framework-agnostic and Caffe-specific command-line options.

Additional Resources
####################

See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific Caffe models.

@endsphinxdirective

@@ -1,65 +1,86 @@
# Converting a Kaldi Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi}

@sphinxdirective

.. note::

   Model Optimizer supports the `nnet1 <http://kaldi-asr.org/doc/dnn1.html>`__ and `nnet2 <http://kaldi-asr.org/doc/dnn2.html>`__ formats of Kaldi models. The support of the `nnet3 <http://kaldi-asr.org/doc/dnn3.html>`__ format is limited.

To convert a Kaldi model, run Model Optimizer with the path to the input model ``.nnet`` or ``.mdl`` file:

.. code-block:: sh

   mo --input_model <INPUT_MODEL>.nnet

Using Kaldi-Specific Conversion Parameters
##########################################

The following list provides the Kaldi-specific parameters.

.. code-block:: sh

   Kaldi-specific parameters:
     --counts COUNTS       A file name with full path to the counts file or empty string to utilize count values from the model file
     --remove_output_softmax
                           Removes the Softmax that is the output layer
     --remove_memory       Remove the Memory layer and add new inputs and outputs instead
Examples of CLI Commands
########################

* To launch Model Optimizer for the ``wsj_dnn5b_smbr`` model with the specified ``.nnet`` file:

  .. code-block:: sh

     mo --input_model wsj_dnn5b_smbr.nnet

* To launch Model Optimizer for the ``wsj_dnn5b_smbr`` model with the existing file that contains counts for the last layer with biases:

  .. code-block:: sh

     mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts

* The Model Optimizer normalizes counts in the following way (a small sketch of this computation follows this list):

  .. math::

     S = \frac{1}{\sum_{j = 0}^{|C|}C_{j}}

  .. math::

     C_{i} = \log(S \cdot C_{i})

  where :math:`C` is the counts array, :math:`C_{i}` is the :math:`i^{th}` element of the counts array, and :math:`|C|` is the number of elements in the counts array.

* The normalized counts are subtracted from the biases of the last or next-to-last layer (if the last layer is a SoftMax).

  .. note:: Model Optimizer will show a warning if a model contains counts values and the ``--counts`` option is not used.

* If you want to remove the last SoftMax layer in the topology, launch the Model Optimizer with the ``--remove_output_softmax`` flag:

  .. code-block:: sh

     mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --remove_output_softmax

  The Model Optimizer finds the last layer of the topology and removes this layer only if it is a SoftMax layer.

  .. note:: Model Optimizer can remove the SoftMax layer only if the topology has one output.

* You can use the *OpenVINO Speech Recognition* sample application for the sample inference of Kaldi models. This sample supports models with only one output. If your model has several outputs, specify the desired one with the ``--output`` option.
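For illustration, a minimal sketch of the counts normalization described above (the counts array here is made up; real values come from the counts file):

.. code-block:: py

   import numpy as np

   # Hypothetical counts; in practice they come from the counts file.
   counts = np.array([1023.0, 2044.0, 511.0])

   s = 1.0 / counts.sum()           # S = 1 / sum_j C_j
   normalized = np.log(s * counts)  # C_i = log(S * C_i)

   # The normalized counts are then subtracted from the biases of the
   # last (or next-to-last, if the last one is SoftMax) layer.
   print(normalized)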
Supported Kaldi Layers
######################

For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.

Additional Resources
####################

See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific Kaldi models. Here are some examples:

* :doc:`Convert Kaldi ASpIRE Chain Time Delay Neural Network (TDNN) Model <openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model>`

@endsphinxdirective

@@ -1,51 +1,61 @@
# Converting an MXNet Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet}

@sphinxdirective

To convert an MXNet model, run Model Optimizer with the path to the ``.params`` file of the input model:

.. code-block:: sh

   mo --input_model model-file-0000.params

Using MXNet-Specific Conversion Parameters
##########################################

The following list provides the MXNet-specific parameters.

.. code-block:: sh

   MXNet-specific parameters:
     --input_symbol <SYMBOL_FILE_NAME>
               Symbol file (for example, "model-symbol.json") that contains a topology structure and layer attributes
     --nd_prefix_name <ND_PREFIX_NAME>
               Prefix name for args.nd and argx.nd files
     --pretrained_model_name <PRETRAINED_MODEL_NAME>
               Name of a pre-trained MXNet model without extension and epoch
               number. This model will be merged with args.nd and argx.nd
               files
     --save_params_from_nd
               Enable saving built parameters file from .nd files
     --legacy_mxnet_model
               Enable Apache MXNet loader to make a model compatible with the latest Apache MXNet version.
               Use only if your model was trained with Apache MXNet version lower than 1.0.0
     --enable_ssd_gluoncv
               Enable transformation for converting the gluoncv ssd topologies.
               Use only if your topology is one of ssd gluoncv topologies
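For example, to provide the symbol file explicitly with ``--input_symbol`` (file names follow the example above):

.. code-block:: sh

   mo --input_model model-file-0000.params --input_symbol model-symbol.json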
.. note::

   By default, Model Optimizer does not use the Apache MXNet loader. It transforms the topology to another format which is compatible with the latest version of Apache MXNet. However, the Apache MXNet loader is required for models trained with a lower version of Apache MXNet. If your model was trained with an Apache MXNet version lower than 1.0.0, specify the ``--legacy_mxnet_model`` key to enable the Apache MXNet loader. Note that the loader does not support models with custom layers. In this case, you must manually recompile Apache MXNet with custom layers and install it in your environment.

Custom Layer Definition
#######################

Internally, when you run Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list. If your topology contains any such layers, Model Optimizer classifies them as custom.

Supported MXNet Layers
#######################

For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.

Frequently Asked Questions (FAQ)
################################

Model Optimizer provides explanatory messages when it is unable to complete conversions due to typographical errors, incorrectly used options, or other issues. A message describes the potential cause of the problem and gives a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections to help you understand what went wrong.

Summary
########

In this document, you learned:
@@ -53,7 +63,13 @@ In this document, you learned:
* Which MXNet models are supported.
* How to convert a trained MXNet model by using the Model Optimizer with both framework-agnostic and MXNet-specific command-line options.

Additional Resources
####################

See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific MXNet models. Here are some examples:

* :doc:`Convert MXNet GluonCV Model <openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models>`
* :doc:`Convert MXNet Style Transfer Model <openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet>`

@endsphinxdirective

@@ -1,28 +1,40 @@

# Converting an ONNX Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX}

## Introduction to ONNX

[ONNX](https://github.com/onnx/onnx) is a representation format for deep learning models that allows AI developers to easily transfer models between different frameworks. It is hugely popular among deep learning tools, like PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and many others.

@sphinxdirective

## Converting an ONNX Model <a name="Convert_From_ONNX"></a>

Introduction to ONNX
####################

This page provides instructions on how to convert a model from the ONNX format to the OpenVINO IR format using Model Optimizer. To use Model Optimizer, install OpenVINO Development Tools by following the [installation instructions](@ref openvino_docs_install_guides_install_dev_tools).

`ONNX <https://github.com/onnx/onnx>`__ is a representation format for deep learning models that allows AI developers to easily transfer models between different frameworks. It is hugely popular among deep learning tools, like PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and many others.

Converting an ONNX Model
########################

This page provides instructions on how to convert a model from the ONNX format to the OpenVINO IR format using Model Optimizer. To use Model Optimizer, install OpenVINO Development Tools by following the :doc:`installation instructions <openvino_docs_install_guides_install_dev_tools>`.

The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.

To convert an ONNX model, run Model Optimizer with the path to the input model `.onnx` file:

To convert an ONNX model, run Model Optimizer with the path to the input model ``.onnx`` file:

```sh
mo --input_model <INPUT_MODEL>.onnx
```

.. code-block:: sh

   mo --input_model <INPUT_MODEL>.onnx

There are no ONNX-specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the [Converting a Model to Intermediate Representation (IR)](@ref openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model) guide.

## Supported ONNX Layers

For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.

There are no ONNX-specific parameters, so only framework-agnostic parameters are available to convert your model. For details, see the *General Conversion Parameters* section in the :doc:`Converting a Model to Intermediate Representation (IR) <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>` guide.

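As a sketch of those framework-agnostic options in use, a command might look like the following (the file name, input shape, and precision here are assumptions for illustration, not values from this page):

.. code-block:: sh

   mo --input_model model.onnx --input_shape [1,3,224,224] --data_type FP16
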
## Additional Resources

See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific ONNX models. Here are some examples:

* [Convert ONNX Faster R-CNN Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN)
* [Convert ONNX GPT-2 Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2)
* [Convert ONNX Mask R-CNN Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN)

Supported ONNX Layers
#####################

For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.

Additional Resources
####################

See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific ONNX models. Here are some examples:

* :doc:`Convert ONNX Faster R-CNN Model <openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN>`
* :doc:`Convert ONNX GPT-2 Model <openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2>`
* :doc:`Convert ONNX Mask R-CNN Model <openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN>`

@endsphinxdirective

@@ -1,23 +1,29 @@

# Converting a PaddlePaddle Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle}

To convert a PaddlePaddle model, use the `mo` script and specify the path to the input `.pdmodel` model file:

@sphinxdirective

To convert a PaddlePaddle model, use the ``mo`` script and specify the path to the input ``.pdmodel`` model file:

.. code-block:: sh

   mo --input_model <INPUT_MODEL>.pdmodel

```sh
mo --input_model <INPUT_MODEL>.pdmodel
```

**For example,** this command converts a YOLOv3 PaddlePaddle network to an OpenVINO IR network:

```sh
mo --input_model=yolov3.pdmodel --input=image,im_shape,scale_factor --input_shape=[1,3,608,608],[1,2],[1,2] --reverse_input_channels --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1
```

.. code-block:: sh

   mo --input_model=yolov3.pdmodel --input=image,im_shape,scale_factor --input_shape=[1,3,608,608],[1,2],[1,2] --reverse_input_channels --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1

## Supported PaddlePaddle Layers

For the list of supported standard layers, refer to the [Supported Framework Layers](@ref openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers) page.

Supported PaddlePaddle Layers
#############################

For the list of supported standard layers, refer to the :doc:`Supported Framework Layers <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>` page.

Officially Supported PaddlePaddle Models
########################################

## Officially Supported PaddlePaddle Models

The following PaddlePaddle models have been officially validated and confirmed to work (as of OpenVINO 2022.1):

@sphinxdirective

.. list-table::
   :widths: 20 25 55
   :header-rows: 1

@@ -67,10 +73,16 @@ The following PaddlePaddle models have been officially validated and confirmed t

   * - BERT
     - language representation
     - Models are exported from `PaddleNLP <https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1>`_. Refer to `README.md <https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/bert#readme>`_.

Frequently Asked Questions (FAQ)
################################

When Model Optimizer is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the :doc:`Model Optimizer FAQ <openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ>`, which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.

Additional Resources
####################

See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific PaddlePaddle models.

@endsphinxdirective

## Frequently Asked Questions (FAQ)

When Model Optimizer is unable to run to completion due to typographical errors, incorrectly used options, or other issues, it provides explanatory messages. They describe the potential cause of the problem and give a link to the [Model Optimizer FAQ](@ref openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.

## Additional Resources

See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific PaddlePaddle models.

@@ -1,39 +1,49 @@

# Converting a PyTorch Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch}

@sphinxdirective

The PyTorch framework is supported through export to the ONNX format. To optimize and deploy a model that was trained with it:

1. [Export a PyTorch model to ONNX](#export-to-onnx).
2. [Convert the ONNX model](Convert_Model_From_ONNX.md) to produce an optimized [Intermediate Representation](@ref openvino_docs_MO_DG_IR_and_opsets) of the model based on the trained network topology, weights, and biases values.

1. `Export a PyTorch model to ONNX <#exporting-a-pytorch-model-to-onnx-format>`__.
2. :doc:`Convert the ONNX model <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>` to produce an optimized :doc:`Intermediate Representation <openvino_docs_MO_DG_IR_and_opsets>` of the model based on the trained network topology, weights, and biases values (a command sketch follows this list).

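For step 2, the exported file is converted like any other ONNX model. A minimal sketch, assuming the export produced a file named ``model.onnx`` as in the export example later on this page:

.. code-block:: sh

   mo --input_model model.onnx
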
## Exporting a PyTorch Model to ONNX Format <a name="export-to-onnx"></a>

PyTorch models are defined in Python. To export them, use the `torch.onnx.export()` method. The code to

Exporting a PyTorch Model to ONNX Format
########################################

PyTorch models are defined in Python. To export them, use the ``torch.onnx.export()`` method. The code to
evaluate or test the model is usually provided with its code and can be used for its initialization and export.
The export to ONNX is crucial for this process, but it is covered by the PyTorch framework; therefore, it is not covered here in detail.
For more information, refer to the [Exporting PyTorch models to ONNX format](https://pytorch.org/docs/stable/onnx.html) guide.

For more information, refer to the `Exporting PyTorch models to ONNX format <https://pytorch.org/docs/stable/onnx.html>`__ guide.

To export a PyTorch model, you need to obtain the model as an instance of the `torch.nn.Module` class and call the `export` function.

To export a PyTorch model, you need to obtain the model as an instance of the ``torch.nn.Module`` class and call the ``export`` function.

```python
import torch

# Instantiate your model. This is just a regular PyTorch model that will be exported in the following steps.
model = SomeModel()
# Evaluate the model to switch some operations from training mode to inference.
model.eval()
# Create dummy input for the model. It will be used to run the model inside export function.
dummy_input = torch.randn(1, 3, 224, 224)
# Call the export function
torch.onnx.export(model, (dummy_input, ), 'model.onnx')
```

.. code-block:: py

   import torch

   # Instantiate your model. This is just a regular PyTorch model that will be exported in the following steps.
   model = SomeModel()
   # Evaluate the model to switch some operations from training mode to inference.
   model.eval()
   # Create dummy input for the model. It will be used to run the model inside export function.
   dummy_input = torch.randn(1, 3, 224, 224)
   # Call the export function
   torch.onnx.export(model, (dummy_input, ), 'model.onnx')

## Known Issues

* As of version 1.8.1, not all PyTorch operations can be exported to ONNX opset 9, which is used by default.
  It is recommended to export models to opset 11 or higher when export to the default opset 9 is not working. In that case, use the `opset_version`
  option of `torch.onnx.export`. For more information about ONNX opset, refer to the [Operator Schemas](https://github.com/onnx/onnx/blob/master/docs/Operators.md) page.

## Additional Resources

See the [Model Conversion Tutorials](@ref openvino_docs_MO_DG_prepare_model_convert_model_tutorials) page for a set of tutorials providing step-by-step instructions for converting specific PyTorch models. Here are some examples:

* [Convert PyTorch BERT-NER Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner)
* [Convert PyTorch RCAN Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN)
* [Convert PyTorch YOLACT Model](@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT)

Known Issues
####################

As of version 1.8.1, not all PyTorch operations can be exported to ONNX opset 9, which is used by default.
It is recommended to export models to opset 11 or higher when export to the default opset 9 is not working. In that case, use the ``opset_version`` option of ``torch.onnx.export``. For more information about ONNX opset, refer to the `Operator Schemas <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`__ page.

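When the default opset causes the failure, the fix is a one-argument change to the export call. A minimal sketch, reusing the ``model`` and ``dummy_input`` names from the export example above:

.. code-block:: py

   # Export with a newer opset when the default opset 9 does not work
   torch.onnx.export(model, (dummy_input, ), 'model.onnx', opset_version=11)
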
Additional Resources
####################

See the :doc:`Model Conversion Tutorials <openvino_docs_MO_DG_prepare_model_convert_model_tutorials>` page for a set of tutorials providing step-by-step instructions for converting specific PyTorch models. Here are some examples:

* :doc:`Convert PyTorch BERT-NER Model <openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_Bert_ner>`
* :doc:`Convert PyTorch RCAN Model <openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RCAN>`
* :doc:`Convert PyTorch YOLACT Model <openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT>`

@endsphinxdirective

Some files were not shown because too many files have changed in this diff.