diff --git a/docs/doxygen/openvino_docs.xml b/docs/doxygen/openvino_docs.xml
index 3cf8656c8ea..20f383d2bf2 100644
--- a/docs/doxygen/openvino_docs.xml
+++ b/docs/doxygen/openvino_docs.xml
@@ -39,6 +39,7 @@
+        <tab type="user" title="Get Started with OpenVINO™ Toolkit on Raspbian* OS" url="@ref openvino_docs_get_started_get_started_raspbian"/>
diff --git a/docs/get_started/get_started_raspbian.md b/docs/get_started/get_started_raspbian.md
new file mode 100644
index 00000000000..4dad1b790a6
--- /dev/null
+++ b/docs/get_started/get_started_raspbian.md
@@ -0,0 +1,109 @@
+# Get Started with OpenVINO™ Toolkit on Raspbian* OS {#openvino_docs_get_started_get_started_raspbian}
+
+The OpenVINO™ toolkit optimizes and runs Deep Learning Neural Network models on Intel® hardware. This guide helps you get started with the OpenVINO™ toolkit you installed on Raspbian* OS.
+
+In this guide, you will:
+* Learn the OpenVINO™ inference workflow.
+* Build and run sample code using detailed instructions.
+
+## OpenVINO™ Toolkit Components
+On Raspbian* OS, the OpenVINO™ toolkit consists of the following components:
+* **Inference Engine:** The software libraries that run inference against the Intermediate Representation (optimized model) to produce inference results.
+* **MYRIAD Plugin:** The plugin developed for inference of neural networks on Intel® Neural Compute Stick 2.
+
+> **NOTE**:
+> * The OpenVINO™ package for Raspberry* does not include the [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To convert models to Intermediate Representation (IR), you need to install it separately on a host machine.
+> * The package does not include the Open Model Zoo demo applications. You can download them separately from the [Open Model Zoo repository](https://github.com/opencv/open_model_zoo).
+
+In addition, [code samples](../IE_DG/Samples_Overview.md) are provided to help you get up and running with the toolkit.
+
+## Intel® Distribution of OpenVINO™ Toolkit Directory Structure
+This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see [Install Intel® Distribution of OpenVINO™ toolkit for Raspbian*](../install_guides/installing-openvino-raspbian.md).
+
+The OpenVINO™ toolkit for Raspbian* OS is distributed without an installer. This document refers to the directory to which you unpacked the toolkit package as `<INSTALL_DIR>`.
+
+The primary tools for deploying your models and applications are installed to the `<INSTALL_DIR>/deployment_tools` directory.
+
+| Directory | Description |
+|:----------------------------------------|:--------------------------------------------------------------------------------------|
+| `inference_engine/` | Inference Engine directory. Contains Inference Engine API binaries and source files, sample and extension source files, and resources like hardware drivers.|
+| `external/` | Third-party dependencies and drivers.|
+| `include/` | Inference Engine header files. For API documentation, see the [Inference Engine API Reference](./annotated.html). |
+| `lib/` | Inference Engine libraries.|
+| `samples/` | Inference Engine samples. Contains source code for C++ and Python* samples and build scripts. See the [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md). |
+| `share/` | CMake configuration files for linking with Inference Engine.|
+
+
+
+## OpenVINO™ Workflow Overview
+
+The OpenVINO™ workflow on Raspbian* OS is as follows:
+1. **Get a pre-trained model** for your inference task. To use your own model for inference, convert it to the `.bin` and `.xml` Intermediate Representation (IR) files, which the Inference Engine uses as input. On Raspberry Pi, the OpenVINO™ toolkit includes only the Inference Engine module; the Model Optimizer is not supported on this platform. To get optimized models, use one of the following options:
+
+   * Download public and Intel pre-trained models from the [Open Model Zoo](https://github.com/opencv/open_model_zoo) using the [Model Downloader tool](@ref omz_tools_downloader_README#model_downloader_usage).
+
+     For more information on pre-trained models, see the [Pre-Trained Models Documentation](@ref omz_models_intel_index).
+
+   * Convert a model using the Model Optimizer from a full installation of Intel® Distribution of OpenVINO™ toolkit on one of the supported platforms (a conversion sketch follows this list). Installation instructions are available:
+ * [Installation Guide for macOS*](../install_guides/installing-openvino-macos.md)
+ * [Installation Guide for Windows*](../install_guides/installing-openvino-windows.md)
+ * [Installation Guide for Linux*](../install_guides/installing-openvino-linux.md)
+2. **Use the Inference Engine API in the application** to run inference against the Intermediate Representation (optimized model) and output inference results. The application can be an OpenVINO™ sample or your own application.
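+
+As a rough sketch of the conversion option above (the host install path and the model file name are assumptions for illustration), you would convert the model on the host machine and copy the resulting IR files to the Raspberry Pi:
+```sh
+# On the host with a full OpenVINO™ installation: convert to FP16 IR,
+# which is the precision the MYRIAD plugin expects.
+python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
+    --input_model my_model.caffemodel --data_type FP16 --output_dir ir
+
+# Copy the .xml and .bin files to the Raspberry Pi (hostname assumed).
+scp ir/my_model.xml ir/my_model.bin pi@raspberrypi.local:~/models/
+```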
+
+## Build and Run Code Samples
+
+Follow the steps below to run the pre-trained Face Detection network using the Inference Engine samples from the OpenVINO™ toolkit.
+
+1. Create a samples build directory. This example uses a directory named `build`:
+```sh
+mkdir build && cd build
+```
+2. Build the Object Detection Sample by running the following commands from the `build` directory:
+```sh
+cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp
+```
+```sh
+make -j2 object_detection_sample_ssd
+```
+3. Download the pre-trained Face Detection model with the Model Downloader:
+
+ ```sh
+ git clone --depth 1 https://github.com/openvinotoolkit/open_model_zoo
+ cd open_model_zoo/tools/downloader
+ python3 -m pip install -r requirements.in
+ python3 downloader.py --name face-detection-adas-0001
+ ```
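+
+   The downloader typically places the IR files in a model-specific subdirectory; the layout below is an assumption, so check the downloader output for the exact paths:
+
+   ```sh
+   # List the downloaded FP16 IR files (directory layout assumed);
+   # the MYRIAD plugin works with FP16 models.
+   ls intel/face-detection-adas-0001/FP16/
+   ```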
+
+4. Run the sample, specifying the model and path to the input image:
+```sh
+./armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i <path_to_image>
+```
+The application outputs an image (`out_0.bmp`) with the detected faces enclosed in rectangles.
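+
+If the sample reports that no MYRIAD device is available, make sure the Intel® Neural Compute Stick 2 is plugged in and the USB rules from the installation guide were applied. A reminder sketch (the script path is an assumption based on the installation guide):
+```sh
+# Add the current user to the users group and install the NCS2 USB
+# rules, then re-plug the device (script path assumed).
+sudo usermod -a -G users "$(whoami)"
+sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
+```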
+
+## Basic Guidelines for Using Code Samples
+
+Following are some basic guidelines for executing the OpenVINO™ workflow using the code samples:
+
+1. Before using the OpenVINO™ samples, always set up the environment (a quick device check is shown after this list):
+```sh
+source <INSTALL_DIR>/bin/setupvars.sh
+```
+2. Have the paths to the following ready:
+   * Code Sample binaries
+   * Media: a video or image file. You can download media to use with the code samples and demo applications from sources such as https://videos.pexels.com and https://images.google.com.
+   * Model in the IR format (`.bin` and `.xml` files).
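+
+As a quick sanity check after setting up the environment, you can list the devices the Inference Engine sees. The Python module path below matches the package layout of this release and is an assumption:
+```sh
+# List devices visible to the Inference Engine (module path assumed);
+# a plugged-in Intel® Neural Compute Stick 2 should appear as MYRIAD.
+python3 -c "from openvino.inference_engine import IECore; print(IECore().available_devices)"
+```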
+
+
+## Additional Resources
+
+Use these resources to learn more about the OpenVINO™ toolkit:
+
+* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
+* [OpenVINO™ Toolkit Overview](../index.md)
+* [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
+* [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+* [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
+* [Overview of OpenVINO™ Toolkit Pre-Trained Models](https://software.intel.com/en-us/openvino-toolkit/documentation/pretrained-models)
+* [OpenVINO™ Hello World Face Detection Exercise](https://github.com/intel-iot-devkit/inference-tutorials-generic)