diff --git a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
index c4721cdead0..7e29a7668b2 100644
--- a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
+++ b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
@@ -161,7 +161,7 @@ Where `HEIGHT` and `WIDTH` are the input images height and width for which the m
* [GNMT](https://github.com/tensorflow/nmt) topology can be converted using [these instructions](tf_specific/Convert_GNMT_From_Tensorflow.md).
* [BERT](https://github.com/google-research/bert) topology can be converted using [these instructions](tf_specific/Convert_BERT_From_Tensorflow.md).
* [XLNet](https://github.com/zihangdai/xlnet) topology can be converted using [these instructions](tf_specific/Convert_XLNet_From_Tensorflow.md).
+* [Attention OCR](https://github.com/emedvedev/attention-ocr) topology can be converted using [these instructions](tf_specific/Convert_AttentionOCR_From_Tensorflow.md).
 
## Loading Non-Frozen Models to the Model Optimizer
diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_AttentionOCR_From_Tensorflow.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_AttentionOCR_From_Tensorflow.md
new file mode 100644
index 00000000000..90e94677dd7
--- /dev/null
+++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_AttentionOCR_From_Tensorflow.md
@@ -0,0 +1,35 @@
+# Convert TensorFlow* Attention OCR Model to Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_AttentionOCR_From_Tensorflow}
+
+This tutorial explains how to convert the Attention OCR (AOCR) model from the [TensorFlow* Attention OCR repository](https://github.com/emedvedev/attention-ocr) to the Intermediate Representation (IR).
+
+## Extract Model from `aocr` Library
+
+The easiest way to get an AOCR model is to install the `aocr` Python\* library:
+```sh
+pip install git+https://github.com/emedvedev/attention-ocr.git@master#egg=aocr
+```
+This library contains a pretrained model and allows you to train and run AOCR from the command line. After installing `aocr`, you can extract the model:
+```sh
+aocr export --format=frozengraph model/path/
+```
+After this step, you can find the frozen model in the `model/path/` folder.
+
+## Convert the TensorFlow* AOCR Model to IR
+
+The original AOCR model contains a data preprocessing part, which consists of the following steps:
+* Decoding the input data: the input is an image encoded as a string, which is decoded to a binary image format.
+* Resizing the binary image to the working resolution.
+
+After that, the resized image is sent to the convolutional neural network (CNN). The Model Optimizer does not support image decoding, so you should cut off the preprocessing part of the model using the `--input` command-line parameter:
+```sh
+python3 path/to/model_optimizer/mo_tf.py \
+--input_model=model/path/frozen_graph.pb \
+--input="map/TensorArrayStack/TensorArrayGatherV3:0[1 32 86 1]" \
+--output "transpose_1,transpose_2" \
+--output_dir path/to/ir/
+```
+
+Where:
+* `map/TensorArrayStack/TensorArrayGatherV3:0[1 32 86 1]` - the name of the node producing the tensor after preprocessing, with the input shape `[1 32 86 1]` specified in brackets.
+* `transpose_1` - the name of the node producing the tensor with predicted characters.
+* `transpose_2` - the name of the node producing the tensor with predicted character probabilities.
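Because the `--input` cut removes the original decode and resize steps, the converted IR expects an already-decoded, already-resized grayscale image packed into a `[1 32 86 1]` (batch, height, width, channels) tensor. A minimal NumPy sketch of this packing step (the `to_aocr_input` helper name is illustrative, not part of the `aocr` library; actual image decoding and resizing must be done separately, e.g. with an image library):

```python
import numpy as np

# Shape expected by the converted IR after the preprocessing cut:
# [1, 32, 86, 1] in NHWC layout (batch, height, width, channels).
HEIGHT, WIDTH = 32, 86

def to_aocr_input(gray_image: np.ndarray) -> np.ndarray:
    """Pack an already-resized (32, 86) grayscale image into [1, 32, 86, 1]."""
    if gray_image.shape != (HEIGHT, WIDTH):
        raise ValueError(
            f"expected a ({HEIGHT}, {WIDTH}) image, got {gray_image.shape}"
        )
    # Add the batch and channel dimensions and cast to float32.
    return gray_image.astype(np.float32).reshape(1, HEIGHT, WIDTH, 1)

# Example: a dummy uniform-gray image standing in for a real decoded crop.
dummy = np.full((HEIGHT, WIDTH), 128, dtype=np.uint8)
tensor = to_aocr_input(dummy)
print(tensor.shape)  # (1, 32, 86, 1)
```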
diff --git a/docs/doxygen/ie_docs.xml b/docs/doxygen/ie_docs.xml
index bb006c9f01c..0b0c179cb7e 100644
--- a/docs/doxygen/ie_docs.xml
+++ b/docs/doxygen/ie_docs.xml
@@ -41,6 +41,7 @@ limitations under the License.
+