[DOCS] make non-code text represented properly (#2717)

* docs: make non-code text represented properly

* docs: remove typo 'g'

* Fixing broken numeration

* Fixing broken numeration

* Fixing broken numeration

Co-authored-by: Alina Alborova <alina.alborova@intel.com>
Author: Andrei Kochin
Date: 2020-11-27 11:51:35 +03:00
Committed by: GitHub
Parent: b1e99ba699
Commit: bfb14af992
2 changed files with 15 additions and 17 deletions


@@ -99,7 +99,7 @@ If you want to cut your model at the end, you have the following options:
```sh
python3 mo.py --input_model=inception_v1.pb -b 1 --output=InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu
```
The resulting Intermediate Representation has three layers:
```xml
<?xml version="1.0" ?>
<net batch="1" name="model" version="2">
@@ -137,13 +137,13 @@ python3 mo.py --input_model=inception_v1.pb -b 1 --output=InceptionV1/InceptionV
</edges>
</net>
```
As you can see in the TensorBoard picture, the original model has more nodes than the Intermediate Representation. Model Optimizer has fused the batch normalization `InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm` into the convolution `InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution`, so it is not present in the final Intermediate Representation. This is not an effect of the `--output` option; it is the usual behavior of the Model Optimizer for batch normalizations and convolutions. The effect of `--output` is that the `ReLU` layer becomes the last one in the converted model.
2. The following command cuts the edge that comes from output port 0 of the `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` node, separating it from the rest of the model and making this node the last one in the model:
```sh
python3 mo.py --input_model=inception_v1.pb -b 1 --output=InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu:0
```
The resulting Intermediate Representation has three layers, which are the same as in the previous case:
```xml
<?xml version="1.0" ?>
<net batch="1" name="model" version="2">
@@ -181,13 +181,13 @@ python3 mo.py --input_model=inception_v1.pb -b 1 --output=InceptionV1/InceptionV
</edges>
</net>
```
This type of cutting is useful for cutting a specific edge when a node has multiple output edges.
3. The following command cuts the edge that comes to input port 0 of the `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` node and removes the rest of the model including `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` itself, deleting this node and making the previous node `InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D` the last one in the model:
```sh
python3 mo.py --input_model=inception_v1.pb -b 1 --output=0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu
```
The resulting Intermediate Representation has two layers, which are the same as the first two layers in the previous case:
```xml
<?xml version="1.0" ?>
<net batch="1" name="inception_v1" version="2">
@@ -225,7 +225,7 @@ If you want to go further and cut the beginning of the model, leaving only the `
```sh
python3 mo.py --input_model=inception_v1.pb -b 1 --output=InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --input=InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu
```
The resulting Intermediate Representation looks as follows:
```xml
<?xml version="1.0" ?>
<net batch="1" name="model" version="2">
@@ -249,16 +249,15 @@ python3 mo.py --input_model=inception_v1.pb -b 1 --output=InceptionV1/InceptionV
</edges>
</net>
```
An `Input` layer is automatically created to feed the layer that is converted from the node specified in `--input`, which is `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` in this case. Model Optimizer does not replace the `ReLU` node with the `Input` layer; it produces such an Intermediate Representation to make the node the first executable node in the final Intermediate Representation. So the Model Optimizer creates enough `Inputs` to feed all input ports of the node that is passed in `--input`.<br>
Even though `--input_shape` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow* model up to the point at which the new input is defined. The input has the same shape [1,64,112,112] as the model converted as a whole or without cutting off the beginning.
2. You can cut an edge incoming to a layer by port number. To specify the incoming port, use the notation `--input=port:input_node`.
So, to cut everything before the `ReLU` layer, cut the edge incoming to port 0 of the `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` node:
```sh
python3 mo.py --input_model=inception_v1.pb -b 1 --input=0:InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu --output=InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu
```
The resulting Intermediate Representation looks as follows:
```xml
<?xml version="1.0" ?>
<net batch="1" name="model" version="2">
@@ -282,16 +281,15 @@ python3 mo.py --input_model=inception_v1.pb -b 1 --input=0:InceptionV1/Inception
</edges>
</net>
```
An `Input` layer is automatically created to feed the layer that is converted from the node specified in `--input`, which is `InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu` in this case. Model Optimizer does not replace the `ReLU` node with the `Input` layer; it produces such an Intermediate Representation to make the node the first executable node in the final Intermediate Representation. So the Model Optimizer creates enough `Inputs` to feed all input ports of the node that is passed in `--input`.<br>
Even though `--input_shape` is not specified in the command line, the shapes for layers are inferred from the beginning of the original TensorFlow* model up to the point at which the new input is defined. The input has the same shape [1,64,112,112] as the model converted as a whole or without cutting off the beginning.
3. You can cut an edge outgoing from a layer by port number. To specify the outgoing port, use the notation `--input=input_node:port`.
So, to cut everything before the `ReLU` layer, cut the edge from the `InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1` node to `ReLU`:
```sh
python3 mo.py --input_model=inception_v1.pb -b 1 --input=InceptionV1/InceptionV1/Conv2d_1a_7x7/BatchNorm/batchnorm/add_1:0 --output=InceptionV1/InceptionV1/Conv2d_1a_7x7/Relu
```
The resulting Intermediate Representation looks as follows:
```xml
<?xml version="1.0" ?>
<net batch="1" name="model" version="2">
@@ -389,4 +387,4 @@ In this case, when `--input_shape` is specified and the node contains multiple i
The correct command line is:
```sh
python3 mo.py --input_model=inception_v1.pb --input=0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape=[1,224,224,3]
```
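The cutting behavior this file documents can be sketched generically. The following is a toy Python illustration only, not Model Optimizer code: the adjacency map, node names, and helper functions are hypothetical stand-ins for the Inception V1 fragment above. It shows why `--output` keeps only the named node and its ancestors, and why `--input` replaces a node's producers with placeholder `Input` layers.

```python
# Toy sketch of graph cutting; NOT Model Optimizer internals.
# Node names are hypothetical stand-ins for the Inception V1 fragment.

def ancestors(graph, node):
    """Return `node` plus everything it transitively depends on."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

def cut_at_input(graph, node):
    """Replace each producer of `node` with a fresh placeholder Input
    (mimicking --input), then keep only the part reachable from `node`."""
    cut = dict(graph)
    placeholders = [f"Input_{i}" for i in range(len(graph.get(node, [])))]
    cut[node] = placeholders
    for p in placeholders:
        cut[p] = []
    return {n: cut[n] for n in ancestors(cut, node)}

# consumer -> list of producers, one entry per input port
graph = {
    "Placeholder": [],
    "weights": [],
    "Conv2D": ["Placeholder", "weights"],
    "BatchNorm/add_1": ["Conv2D"],
    "Relu": ["BatchNorm/add_1"],
    "MaxPool": ["Relu"],
}

# --output=Relu: everything after Relu (here, MaxPool) is dropped
kept = ancestors(graph, "Relu")
print(sorted(kept))

# --input=0:Relu --output=Relu: Relu's producer is replaced by Input_0
trimmed = cut_at_input(graph, "Relu")
print(sorted(trimmed))  # ['Input_0', 'Relu']
```

The real tool, of course, also transfers weights and infers shapes while cutting; this sketch captures only the reachability logic.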


@@ -134,7 +134,7 @@ Output <Node> output = node->output(0);
auto pshape = data.get_partial_shape();
auto el_type = data.get_element_type();
// Getting parent for input port
Output<Node> parent_output;
parent_output = data.get_source_output();
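The nGraph snippet above walks from an input to its producing output with `get_source_output()`. As a toy analogy only (plain Python, not the nGraph API), the same lookup on an adjacency map keyed by consumer node looks like this; the graph and node names are hypothetical:

```python
# Hypothetical consumer -> producers map; one producer per input port.
graph = {
    "BatchNorm/add_1": ["Conv2D"],
    "Relu": ["BatchNorm/add_1"],
}

def parent_of(graph, node, port=0):
    """Producer feeding input port `port` of `node` (cf. get_source_output)."""
    return graph[node][port]

print(parent_of(graph, "Relu"))  # BatchNorm/add_1
```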