diff --git a/docs/ops/arithmetic/Add_1.md b/docs/ops/arithmetic/Add_1.md
index cd81141d1ea..a9464ffc447 100644
--- a/docs/ops/arithmetic/Add_1.md
+++ b/docs/ops/arithmetic/Add_1.md
@@ -4,7 +4,16 @@
**Category**: Arithmetic binary operation
-**Short description**: *Add* performs element-wise addition operation with two given tensors applying multi-directional broadcast rules.
+**Short description**: *Add* performs element-wise addition of two given tensors, applying the broadcasting rule specified in the *auto_broadcast* attribute.
+
+**Detailed description**
+Before performing the arithmetic operation, the input tensors *a* and *b* are broadcast if their shapes differ and the `auto_broadcast` attribute is not `none`. Broadcasting is performed according to the `auto_broadcast` value.
+
+After broadcasting, *Add* performs addition on the input tensors *a* and *b* using the formula below:
+
+\f[
+o_{i} = a_{i} + b_{i}
+\f]
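As an illustration only (not the OpenVINO API), the numpy-style behaviour described above can be sketched with NumPy itself:

```python
import numpy as np

# Element-wise addition with numpy-style implicit broadcasting:
# shapes (2, 1) and (3,) broadcast to the common shape (2, 3).
a = np.array([[1], [2]])       # shape (2, 1)
b = np.array([10, 20, 30])     # shape (3,)
o = a + b                      # o_i = a_i + b_i after broadcasting
print(o.shape)                 # (2, 3)
```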
**Attributes**:
@@ -12,33 +21,26 @@
* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
- * *none* - no auto-broadcasting is allowed, all input shapes should match
- * *numpy* - numpy broadcasting rules, aligned with ONNX Broadcasting. Description is available in ONNX docs.
+ * *none* - no auto-broadcasting is allowed, all input shapes must match,
+ * *numpy* - numpy broadcasting rules, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md),
+ * *pdpd* - PaddlePaddle-style implicit broadcasting, description available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md).
* **Type**: string
* **Default value**: "numpy"
* **Required**: *no*
**Inputs**
-* **1**: A tensor of type T. Required.
-* **2**: A tensor of type T. Required.
+* **1**: A tensor of type T and arbitrary shape and rank. **Required.**
+* **2**: A tensor of type T and arbitrary shape and rank. **Required.**
**Outputs**
-* **1**: The result of element-wise addition operation. A tensor of type T.
+* **1**: The result of element-wise addition operation. A tensor of type T with shape equal to the broadcast shape of the two inputs.
**Types**
* *T*: any numeric type.
-**Detailed description**
-Before performing arithmetic operation, input tensors *a* and *b* are broadcasted if their shapes are different and `auto_broadcast` attributes is not `none`. Broadcasting is performed according to `auto_broadcast` value.
-
-After broadcasting *Add* does the following with the input tensors *a* and *b*:
-
-\f[
-o_{i} = a_{i} + b_{i}
-\f]
**Examples**
@@ -46,6 +48,7 @@ o_{i} = a_{i} + b_{i}
```xml
+
256
@@ -68,6 +71,7 @@ o_{i} = a_{i} + b_{i}
*Example 2: broadcast*
```xml
+
8
diff --git a/docs/ops/broadcast_rules.md b/docs/ops/broadcast_rules.md
new file mode 100644
index 00000000000..3490fae3713
--- /dev/null
+++ b/docs/ops/broadcast_rules.md
@@ -0,0 +1,139 @@
+# Broadcast Rules For Elementwise Operations {#openvino_docs_ops_broadcast_rules}
+
+The purpose of this document is to provide a set of common rules applicable to operations that use broadcasting.
+
+## Description
+
+Broadcasting allows performing element-wise operations on inputs with an arbitrary number of dimensions. Two types of broadcasting are supported: Numpy and PDPD.
+
+## Rules
+
+**None broadcast**:
+1. The shapes of the input tensors must match exactly.
+2. No implicit broadcast rule is applied.
+
+**Numpy broadcast**:
+1. The smaller tensor's shape is prepended with dimension(s) of size 1 so that both tensors have the same rank.
+2. The aligned dimensions of the two tensors are compared elementwise, starting from the rightmost.
+3. Two dimensions are compatible when they are equal or when one of them is 1.
+4. A dimension of size 1 is implicitly broadcast to match the corresponding dimension of the other tensor.
+5. When both inputs are of rank 0, the result is a scalar.
+
+**PDPD broadcast**:
+1. The first input tensor A can be of any rank; the second input B has rank smaller than or equal to that of A.
+2. The shape of input tensor B must be a contiguous subsequence of the shape of input A.
+3. B is broadcast to match the shape of A, where the provided *axis* is the index of the dimension of A
+   at which broadcasting of B starts.
+4. If *axis* is set to the default (-1), it is computed as `axis = rank(A) - rank(B)`.
+5. Trailing dimensions of size 1 in input B are ignored when matching the subsequence,
+   e.g. `shape(B) = (3, 1) => (3)`.
+
+## Numpy examples
+
+* `A: Shape(,) -> scalar`
+ `B: Shape(,) -> scalar`
+ `Result: Shape(,) -> scalar`
+
+* `A: Shape(2, 3)`
+ `B: Shape( 1)`
+ `Result: Shape(2, 3)`
+
+* `A: Shape( 3)`
+ `B: Shape(2, 3)`
+ `Result: Shape(2, 3)`
+
+* `A: Shape(2, 3, 5)`
+ `B: Shape(,) -> scalar`
+ `Result: Shape(2, 3, 5)`
+
+* `A: Shape(2, 1, 5)`
+ `B: Shape(1, 4, 5)`
+ `Result: Shape(2, 4, 5)`
+
+* `A: Shape( 6, 5)`
+ `B: Shape(2, 1, 5)`
+ `Result: Shape(2, 6, 5)`
+
+* `A: Shape(2, 1, 5)`
+ `B: Shape( 4, 1)`
+ `Result: Shape(2, 4, 5)`
+
+* `A: Shape(3, 2, 1, 4)`
+ `B: Shape( 5, 4)`
+ `Result: Shape(3, 2, 5, 4)`
+
+* `A: Shape( 1, 5, 3)`
+ `B: Shape(5, 2, 1, 3)`
+ `Result: Shape(5, 2, 5, 3)`
+
+* `A: Shape(3)`
+ `B: Shape(2)`
+  `Result: broadcast won't happen due to a dimension mismatch`
+
+* `A: Shape(3, 1, 5)`
+ `B: Shape(4, 4, 5)`
+  `Result: broadcast won't happen due to a dimension mismatch on the leftmost axis`
+
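The Numpy rules listed above can be sketched as a small shape-inference helper. `numpy_broadcast_shape` is a hypothetical function written for illustration, not part of any library:

```python
def numpy_broadcast_shape(shape_a, shape_b):
    """Return the numpy-broadcast shape of two shapes, or None if incompatible."""
    # Right-align by prepending 1s to the shorter shape (rule 1).
    rank = max(len(shape_a), len(shape_b))
    a = (1,) * (rank - len(shape_a)) + tuple(shape_a)
    b = (1,) * (rank - len(shape_b)) + tuple(shape_b)
    out = []
    for da, db in zip(a, b):
        # Dimensions are compatible when equal or when one of them is 1 (rule 3).
        if da == db or da == 1 or db == 1:
            out.append(max(da, db))  # size-1 dims stretch to the other size (rule 4)
        else:
            return None  # dimension mismatch: broadcast won't happen
    return tuple(out)

print(numpy_broadcast_shape((2, 1, 5), (4, 1)))  # (2, 4, 5)
print(numpy_broadcast_shape((3,), (2,)))         # None
```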
+## PDPD examples
+
+* `A: Shape(2, 3, 4, 5)`
+ `B: Shape( 3, 4 ) with axis = 1`
+ `Result: Shape(2, 3, 4, 5)`
+
+* `A: Shape(2, 3, 4, 5)`
+ `B: Shape( 3, 1 ) with axis = 1`
+ `Result: Shape(2, 3, 4, 5)`
+
+* `A: Shape(2, 3, 4, 5)`
+ `B: Shape( 4, 5) with axis=-1(default) or axis=2`
+ `Result: Shape(2, 3, 4, 5)`
+
+* `A: Shape(2, 3, 4, 5)`
+ `B: Shape(1, 3 ) with axis = 0`
+ `Result: Shape(2, 3, 4, 5)`
+
+* `A: Shape(2, 3, 4, 5)`
+ `B: Shape(,)`
+ `Result: Shape(2, 3, 4, 5)`
+
+* `A: Shape(2, 3, 4, 5)`
+ `B: Shape(5,)`
+ `Result: Shape(2, 3, 4, 5)`
+
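The PDPD matching logic can be sketched the same way. `pdpd_broadcast_shape` is a hypothetical helper for illustration only:

```python
def pdpd_broadcast_shape(shape_a, shape_b, axis=-1):
    """Check PDPD-style broadcast of B onto A; return the result shape or None."""
    a, b = tuple(shape_a), tuple(shape_b)
    # Trailing dimensions of size 1 in B are ignored (rule 5), e.g. (3, 1) -> (3,).
    while b and b[-1] == 1:
        b = b[:-1]
    # Default axis is rank(A) - rank(B) (rule 4).
    if axis == -1:
        axis = len(a) - len(b)
    # B must match a contiguous subsequence of A starting at `axis` (rules 2-3).
    if axis < 0 or axis + len(b) > len(a):
        return None
    for i, db in enumerate(b):
        if db != a[axis + i] and db != 1:
            return None
    return a  # the result always has the shape of A

print(pdpd_broadcast_shape((2, 3, 4, 5), (3, 1), axis=1))  # (2, 3, 4, 5)
```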
+# Bidirectional Broadcast Rules {#openvino_docs_ops_bidirectional_broadcast_rules}
+
+## Description
+
+Bidirectional Broadcast is not intended for element-wise operations. Its purpose is to broadcast an array to a given shape.
+
+## Rules
+
+**Bidirectional broadcast**:
+1. Dimensions of the input tensors are right-aligned.
+2. The following broadcast rule is applied: `numpy.array(input) * numpy.ones(target_shape)`.
+3. Two corresponding dimensions must have the same value, or one of them must be equal to 1.
+4. The output shape may not be equal to `target_shape` if:
+   * `target_shape` contains dimensions of size 1,
+   * the rank of `target_shape` is smaller than the rank of the input tensor.
+
+## Bidirectional examples
+
+* `A: Shape(5)`
+ `B: Shape(1)`
+ `Result: Shape(5)`
+
+* `A: Shape(2, 3)`
+ `B: Shape( 3)`
+ `Result: Shape(2, 3)`
+
+* `A: Shape(3, 1)`
+ `B: Shape(3, 4)`
+ `Result: Shape(3, 4)`
+
+* `A: Shape(3, 4)`
+ `B: Shape(,) -> scalar`
+ `Result: Shape(3, 4)`
+
+* `A: Shape( 3, 1)`
+ `B: Shape(2, 1, 6)`
+ `Result: Shape(2, 3, 6)`
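The last example can be reproduced directly with the rule from step 2; NumPy is used here purely as an illustration of the semantics:

```python
import numpy as np

# Bidirectional broadcast of `data` to `target_shape`, following
# numpy.array(input) * numpy.ones(target_shape): the (3, 1) input and the
# (2, 1, 6) target broadcast against each other, yielding shape (2, 3, 6).
data = np.zeros((3, 1))
target_shape = (2, 1, 6)
result = data * np.ones(target_shape)
print(result.shape)  # (2, 3, 6)
```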
diff --git a/docs/ops/movement/Broadcast_1.md b/docs/ops/movement/Broadcast_1.md
index a9b37505ce8..00dfbfc618e 100644
--- a/docs/ops/movement/Broadcast_1.md
+++ b/docs/ops/movement/Broadcast_1.md
@@ -10,7 +10,7 @@
*Broadcast* takes the first tensor `data` and, following broadcasting rules that are specified by `mode` attribute and the 3rd input `axes_mapping`, builds a new tensor with shape matching the 2nd input tensor `target_shape`. `target_shape` input is a 1D integer tensor that represents required shape of the output.
-Attribute `mode` and the 3rd input `axes_mapping` are relevant for cases when rank of the input `data` tensor doesn't match the size of the `target_shape` input. They both define how axes from `data` shape are mapped to the output axes. If `mode` is set to `numpy`, it means that the standard one-directional numpy broadcasting rules are applied. They are similar to rules that applied in all binary element-wise operations in case when `auto_broadcasting` attribute is set to `numpy`, and are similar to rules described at [here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html#general-broadcasting-rules), when only one-directional broadcasting is applied: input tensor `data` is broadcasted to `target_shape` but not vice-versa.
+Attribute `mode` and the 3rd input `axes_mapping` are relevant for cases when the rank of the input `data` tensor doesn't match the size of the `target_shape` input. They both define how axes from the `data` shape are mapped to the output axes. If `mode` is set to `numpy`, the standard one-directional numpy broadcasting rules are applied. These rules are described in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md); here only one-directional broadcasting is applied: the input tensor `data` is broadcast to `target_shape`, but not vice versa.
In case if `mode` is set to `explicit`, then 3rd input `axes_mapping` comes to play. It contains a list of axis indices, each index maps an axis from the 1st input tensor `data` to axis in the output. The size of `axis_mapping` should match the rank of input `data` tensor, so all axes from `data` tensor should be mapped to axes of the output.
diff --git a/docs/ops/movement/Broadcast_3.md b/docs/ops/movement/Broadcast_3.md
index 460f613dbd7..40b31fffad2 100644
--- a/docs/ops/movement/Broadcast_3.md
+++ b/docs/ops/movement/Broadcast_3.md
@@ -10,9 +10,9 @@
*Broadcast* takes the first tensor `data` and, following broadcasting rules that are specified by `mode` attribute and the 3rd input `axes_mapping`, builds a new tensor with shape matching the 2nd input tensor `target_shape`. `target_shape` input is a 1D integer tensor that represents required shape of the output.
-Attribute `mode` and the 3rd input `axes_mapping` are relevant for cases when rank of the input `data` tensor doesn't match the size of the `target_shape` input. They both define how axes from `data` shape are mapped to the output axes. If `mode` is set to `numpy`, it means that the standard one-directional numpy broadcasting rules are applied. They are similar to rules that applied in all binary element-wise operations in case when `auto_broadcasting` attribute is set to `numpy`, and are similar to rules described at [here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html#general-broadcasting-rules), when only one-directional broadcasting is applied: input tensor `data` is broadcasted to `target_shape` but not vice-versa.
+Attribute `mode` and the 3rd input `axes_mapping` are relevant for cases when the rank of the input `data` tensor doesn't match the size of the `target_shape` input. They both define how axes from the `data` shape are mapped to the output axes. If `mode` is set to `numpy`, the standard one-directional numpy broadcasting rules are applied. These rules are described in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md); here only one-directional broadcasting is applied: the input tensor `data` is broadcast to `target_shape`, but not vice versa.
-In case if `mode` is set to `bidirectional`, then the broadcast rule is similar to `numpy.array(input) * numpy.ones(target_shape)`. Dimensions are right alignment. Two corresponding dimension must have the same value, or one of them is equal to 1. If this attribute value is used, then the 3rd input for the operation shouldn't be provided. The behaviour of such kind of broadcasting is equivalent to ONNX operation [Expand](https://github.com/onnx/onnx/blob/rel-1.7.0/docs/Operators.md#Expand).
+If `mode` is set to `bidirectional`, the broadcast rule is similar to `numpy.array(input) * numpy.ones(target_shape)`. Dimensions are right-aligned, and two corresponding dimensions must have the same value, or one of them must be equal to 1. If this mode is used, the 3rd input to the operation must not be provided. The behaviour is described in [Bidirectional Broadcast Rules](../broadcast_rules.md).
In case if `mode` is set to `explicit`, then 3rd input `axes_mapping` comes to play. It contains a list of axis indices, each index maps an axis from the 1st input tensor `data` to axis in the output. The size of `axis_mapping` should match the rank of input `data` tensor, so all axes from `data` tensor should be mapped to axes of the output.