diff --git a/docs/Extensibility_UG/Intro.md b/docs/Extensibility_UG/Intro.md
index c1721635fce..ebb5e603319 100644
--- a/docs/Extensibility_UG/Intro.md
+++ b/docs/Extensibility_UG/Intro.md
@@ -9,7 +9,7 @@
    openvino_docs_Extensibility_UG_add_openvino_ops
    openvino_docs_Extensibility_UG_Frontend_Extensions
    openvino_docs_Extensibility_UG_GPU
-   openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel
+   openvino_docs_Extensibility_UG_VPU_Kernel
    openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
 
 @endsphinxdirective
diff --git a/docs/Extensibility_UG/VPU_Extensibility.md b/docs/Extensibility_UG/VPU_Extensibility.md
index 3b45e150140..9d483abc4e1 100644
--- a/docs/Extensibility_UG/VPU_Extensibility.md
+++ b/docs/Extensibility_UG/VPU_Extensibility.md
@@ -1,4 +1,4 @@
-# How to Implement Custom Layers for VPU (Intel® Neural Compute Stick 2) {#openvino_docs_IE_DG_Extensibility_DG_VPU_Kernel}
+# How to Implement Custom Layers for VPU (Intel® Neural Compute Stick 2) {#openvino_docs_Extensibility_UG_VPU_Kernel}
 
 To enable operations not supported by OpenVINO™ out of the box, you need a custom extension for Model Optimizer, a custom nGraph operation set, and a custom kernel for the device you will target. This page describes custom kernel support for one the VPU, the Intel® Neural Compute Stick 2 device, which uses the MYRIAD device plugin.