Compare commits: 2021.3-doc...releases/2 (7 commits)

| SHA1 |
|---|
| 2eb6fbcca1 |
| 1bf8a41ff6 |
| c5f7ad383e |
| cccff7fe0d |
| b33800a61c |
| 320887b424 |
| 5f2e584231 |

@@ -93,7 +93,11 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi

* [Converting Your ONNX* Model](prepare_model/convert_model/Convert_Model_From_ONNX.md)
* [Converting Faster-RCNN ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_Faster_RCNN.md)
* [Converting Mask-RCNN ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_Mask_RCNN.md)
* [Converting DLRM ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_DLRM.md)
* [Converting GPT2 ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_GPT2.md)
* [Converting Your PyTorch* Model](prepare_model/convert_model/Convert_Model_From_PyTorch.md)
* [Converting F3Net PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_F3Net.md)
* [Converting QuartzNet PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_QuartzNet.md)
* [Converting YOLACT PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md)
* [Model Optimizations Techniques](prepare_model/Model_Optimization_Techniques.md)
* [Cutting parts of the model](prepare_model/convert_model/Cutting_Model.md)
* [Sub-graph Replacement in Model Optimizer](prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md)

@@ -27,17 +27,6 @@

Listed models are built with the operation set version 8 except the GPT-2 model. Models that are upgraded to higher operation set versions may not be supported.

## Supported Pytorch* Models via ONNX Conversion

Starting from the 2019R4 release, the OpenVINO™ toolkit officially supports public Pytorch* models (from `torchvision` 0.2.1 and `pretrainedmodels` 0.7.4 packages) via ONNX conversion.
The list of supported topologies is presented below:

|Package Name|Supported Models|
|:----|:----|
| [Torchvision Models](https://pytorch.org/docs/stable/torchvision/index.html) | alexnet, densenet121, densenet161, densenet169, densenet201, resnet101, resnet152, resnet18, resnet34, resnet50, vgg11, vgg13, vgg16, vgg19 |
| [Pretrained Models](https://github.com/Cadene/pretrained-models.pytorch) | alexnet, fbresnet152, resnet101, resnet152, resnet18, resnet34, resnet152, resnet18, resnet34, resnet50, resnext101_32x4d, resnext101_64x4d, vgg11 |
| [ESPNet Models](https://github.com/sacmehta/ESPNet/tree/master/pretrained) | |
| [MobileNetV3](https://github.com/d-li14/mobilenetv3.pytorch) | |

## Supported PaddlePaddle* Models via ONNX Conversion

Starting from the R5 release, the OpenVINO™ toolkit officially supports public PaddlePaddle* models via ONNX conversion.
The list of supported topologies downloadable from PaddleHub is presented below:

@@ -0,0 +1,53 @@

# Converting a PyTorch* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch}

The PyTorch\* framework is supported through export to the ONNX\* format. The following is a summary of the steps for optimizing and deploying a model trained with PyTorch\*:

1. [Export the PyTorch\* model to ONNX\*](#export-to-onnx).
2. [Configure the Model Optimizer](../Config_Model_Optimizer.md) for ONNX\*.
3. [Convert an ONNX\* model](Convert_Model_From_ONNX.md) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases.
4. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via the provided [sample applications](../../../IE_DG/Samples_Overview.md).
5. [Integrate](../../../IE_DG/Samples_Overview.md) the Inference Engine into your application to deploy the model in the target environment.

## Supported Topologies

The following is a list of models that were tested and are guaranteed to be supported.
It is not a complete list of models that can be converted to ONNX\* and then to IR.

|Package Name|Supported Models|
|:----|:----|
| [Torchvision Models](https://pytorch.org/docs/stable/torchvision/index.html) | alexnet, densenet121, densenet161, densenet169, densenet201, resnet101, resnet152, resnet18, resnet34, resnet50, vgg11, vgg13, vgg16, vgg19 |
| [Pretrained Models](https://github.com/Cadene/pretrained-models.pytorch) | alexnet, fbresnet152, resnet101, resnet152, resnet18, resnet34, resnet50, resnext101_32x4d, resnext101_64x4d, vgg11 |

**Other supported topologies**

* [ESPNet Models](https://github.com/sacmehta/ESPNet/tree/master/pretrained)
* [MobileNetV3](https://github.com/d-li14/mobilenetv3.pytorch)
* The F3Net topology can be converted using the [Convert PyTorch\* F3Net to the IR](pytorch_specific/Convert_F3Net.md) instructions.
* QuartzNet topologies from the [NeMo project](https://github.com/NVIDIA/NeMo) can be converted using the [Convert PyTorch\* QuartzNet to the IR](pytorch_specific/Convert_QuartzNet.md) instructions.
* The YOLACT topology can be converted using the [Convert PyTorch\* YOLACT to the IR](pytorch_specific/Convert_YOLACT.md) instructions.

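Torchvision models from the table above can be obtained directly from the `torchvision.models` package and exported like any other PyTorch\* model. A minimal sketch, assuming `torchvision` is installed and using `resnet50` purely as an example:

```python
import torch
from torchvision import models

# Load a pretrained torchvision model; any model from the table above works the same way.
model = models.resnet50(pretrained=True)
# Switch the model to inference mode before exporting.
model.eval()

# Export to ONNX using a dummy input of the expected shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, (dummy_input, ), 'resnet50.onnx')
```
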
## Export PyTorch\* Model to ONNX\* Format <a name="export-to-onnx"></a>

PyTorch\* models are defined in Python\* code. To export such a model, use the `torch.onnx.export()` method.
Only the basics are covered here; the export to ONNX\* is a crucial step, but it is handled by the PyTorch\* framework itself.
For more information, please refer to the [PyTorch\* documentation](https://pytorch.org/docs/stable/onnx.html).

To export a PyTorch\* model, you need to obtain the model as an instance of the `torch.nn.Module` class and call the `export` function.
```python
import torch

# Instantiate your model. This is just a regular PyTorch model that will be exported in the following steps.
model = SomeModel()
# Switch the model to inference mode: some operations behave differently in training and evaluation modes.
model.eval()
# Create a dummy input for the model. It will be used to run the model inside the export function.
dummy_input = torch.randn(1, 3, 224, 224)
# Call the export function.
torch.onnx.export(model, (dummy_input, ), 'model.onnx')
```

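Before running the Model Optimizer, you can optionally verify that the exported file is a structurally valid ONNX\* model. A minimal sketch, assuming the `onnx` Python\* package is installed and `model.onnx` was produced by the script above:

```python
import onnx

# Load the exported model and run the ONNX structural checker on it.
onnx_model = onnx.load('model.onnx')
onnx.checker.check_model(onnx_model)
print('model.onnx is well formed')
```
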
## Known Issues

* As of PyTorch\* version 1.8.1, not all PyTorch\* operations can be exported to ONNX\* opset 9, which is used by default.
It is recommended to export models to opset 11 or higher when export to the default opset 9 does not work. In that case, use the `opset_version`
option of `torch.onnx.export`, as shown in the sketch below. For more information about ONNX\* opsets, refer to the [Operator Schemas](https://github.com/onnx/onnx/blob/master/docs/Operators.md).

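A minimal sketch of exporting with an explicit opset; a torchvision model is used only so the example is self-contained, any `torch.nn.Module` instance works the same way:

```python
import torch
import torchvision

# Any model will do here; a torchvision model keeps the example runnable.
model = torchvision.models.resnet18(pretrained=False)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Request ONNX opset 11 explicitly when the default opset 9 export fails.
torch.onnx.export(model, (dummy_input, ), 'model.onnx', opset_version=11)
```
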
@@ -1,4 +1,4 @@

# Convert PyTorch* F3Net to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_F3Net}
# Convert PyTorch* F3Net to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_F3Net}

[F3Net](https://github.com/weijun88/F3Net): Fusion, Feedback and Focus for Salient Object Detection

@@ -1,4 +1,4 @@

# Convert PyTorch* QuartzNet to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_QuartzNet}
# Convert PyTorch* QuartzNet to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_QuartzNet}

[NeMo project](https://github.com/NVIDIA/NeMo) provides the QuartzNet model.

@@ -0,0 +1,107 @@

# Convert PyTorch\* RNN-T Model to the Intermediate Representation (IR) {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RNNT}

This guide covers the conversion of the RNN-T model from the [MLCommons](https://github.com/mlcommons) repository. Follow
the steps below to export the PyTorch* model to ONNX* before converting it to IR:

**Step 1**. Clone the RNN-T PyTorch implementation from the MLCommons repository (revision r1.0). Make a shallow clone to pull
only the RNN-T model without the full repository. If you already have a full clone of the repository, skip this step and go to **Step 2**:
```bash
git clone -b r1.0 -n https://github.com/mlcommons/inference rnnt_for_openvino --depth 1
cd rnnt_for_openvino
git checkout HEAD speech_recognition/rnnt
```

**Step 2**. If you already have a full clone of the MLCommons inference repository, create a folder for the
pretrained PyTorch model, where the conversion into IR will take place. You will also need to specify the path to
your full clone in **Step 5**. Skip this step if you made a shallow clone.

```bash
mkdir rnnt_for_openvino
cd rnnt_for_openvino
```

**Step 3**. Download the pretrained weights for the PyTorch implementation from https://zenodo.org/record/3662521#.YG21DugzZaQ.
On UNIX*-like systems you can use `wget`:
```bash
wget https://zenodo.org/record/3662521/files/DistributedDataParallel_1576581068.9962234-epoch-100.pt
```
The link was taken from `setup.sh` in the `speech_recognition/rnnt` subfolder. You will get exactly the same weights as
if you were following the steps from https://github.com/mlcommons/inference/tree/master/speech_recognition/rnnt.

**Step 4**. Install the required Python* packages:
```bash
pip3 install torch toml
```

**Step 5**. Export the RNN-T model to ONNX* with the script below. Copy the code into a file named
`export_rnnt_to_onnx.py` and run it in the current directory `rnnt_for_openvino`:

> **NOTE**: If you already have a full clone of the MLCommons inference repository, you need to
> specify the `mlcommons_inference_path` variable.

```python
import toml
import torch
import sys


def load_and_migrate_checkpoint(ckpt_path):
    # Load the PyTorch checkpoint and rename/remove keys so that the state dict
    # matches the model definition from the MLCommons repository.
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    migrated_state_dict = {}
    for key, value in checkpoint['state_dict'].items():
        key = key.replace("joint_net", "joint.net")
        migrated_state_dict[key] = value
    del migrated_state_dict["audio_preprocessor.featurizer.fb"]
    del migrated_state_dict["audio_preprocessor.featurizer.window"]
    return migrated_state_dict


mlcommons_inference_path = './'  # specify the relative path to your MLCommons inference clone
checkpoint_path = 'DistributedDataParallel_1576581068.9962234-epoch-100.pt'
config_toml = mlcommons_inference_path + 'speech_recognition/rnnt/pytorch/configs/rnnt.toml'
config = toml.load(config_toml)
rnnt_vocab = config['labels']['labels']
sys.path.insert(0, mlcommons_inference_path + 'speech_recognition/rnnt/pytorch')

from model_separable_rnnt import RNNT

model = RNNT(config['rnnt'], len(rnnt_vocab) + 1, feature_config=config['input_eval'])
model.load_state_dict(load_and_migrate_checkpoint(checkpoint_path))

# Export the encoder, prediction, and joint sub-networks as separate ONNX models.
seq_length, batch_size, feature_length = 157, 1, 240
inp = torch.randn([seq_length, batch_size, feature_length])
feature_length = torch.LongTensor([seq_length])
x_padded, x_lens = model.encoder(inp, feature_length)
torch.onnx.export(model.encoder, (inp, feature_length), "rnnt_encoder.onnx", opset_version=12,
                  input_names=['input.1', '1'], dynamic_axes={'input.1': {0: 'seq_len', 1: 'batch'}})

symbol = torch.LongTensor([[20]])
hidden = torch.randn([2, batch_size, 320]), torch.randn([2, batch_size, 320])
g, hidden = model.prediction.forward(symbol, hidden)
torch.onnx.export(model.prediction, (symbol, hidden), "rnnt_prediction.onnx", opset_version=12,
                  input_names=['input.1', '1', '2'],
                  dynamic_axes={'input.1': {0: 'batch'}, '1': {1: 'batch'}, '2': {1: 'batch'}})

f = torch.randn([batch_size, 1, 1024])
model.joint.forward(f, g)
torch.onnx.export(model.joint, (f, g), "rnnt_joint.onnx", opset_version=12,
                  input_names=['0', '1'], dynamic_axes={'0': {0: 'batch'}, '1': {0: 'batch'}})
```

```bash
python3 export_rnnt_to_onnx.py
```

After completing this step, the files `rnnt_encoder.onnx`, `rnnt_prediction.onnx`, and `rnnt_joint.onnx` will be saved in
the current directory.

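Optionally, you can inspect the exported files before conversion. A minimal sketch, assuming the `onnx` Python* package is installed; it checks each model structurally and prints its input names, which are the names referenced by the Model Optimizer commands in **Step 6**:

```python
import onnx

# Validate each exported sub-network and list its input names.
for onnx_file in ('rnnt_encoder.onnx', 'rnnt_prediction.onnx', 'rnnt_joint.onnx'):
    model = onnx.load(onnx_file)
    onnx.checker.check_model(model)
    print(onnx_file, [inp.name for inp in model.graph.input])
```
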
**Step 6**. Run the conversion command:

```bash
python3 {path_to_openvino}/mo.py --input_model rnnt_encoder.onnx --input "input.1[157 1 240],1->157"
python3 {path_to_openvino}/mo.py --input_model rnnt_prediction.onnx --input "input.1[1 1],1[2 1 320],2[2 1 320]"
python3 {path_to_openvino}/mo.py --input_model rnnt_joint.onnx --input "0[1 1 1024],1[1 1 320]"
```

Please note that the hardcoded sequence length value of 157 was taken from MLCommons, but the conversion to IR preserves the
network's [reshapeability](../../../../IE_DG/ShapeInference.md); this means that you can manually change the input shapes to any other values,
either during conversion or at inference time, as shown in the sketch below.

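A minimal sketch of reshaping the converted encoder at inference time with the Inference Engine Python* API; it assumes that the IR files `rnnt_encoder.xml` and `rnnt_encoder.bin` were produced by the command above and that the input name is still `input.1`:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model='rnnt_encoder.xml', weights='rnnt_encoder.bin')

# Change the sequence length from the hardcoded 157 to 300 before loading the network.
net.reshape({'input.1': [300, 1, 240]})
exec_net = ie.load_network(network=net, device_name='CPU')
```
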
@@ -1,4 +1,4 @@

# Convert PyTorch* YOLACT to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_YOLACT}
# Convert PyTorch* YOLACT to the Intermediate Representation {#openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT}

You Only Look At CoefficienTs (YOLACT) is a simple, fully convolutional model for real-time instance segmentation.
The PyTorch\* implementation is publicly available in [this GitHub* repository](https://github.com/dbolya/yolact).

@@ -52,10 +52,14 @@ limitations under the License.

<tab type="usergroup" title="Converting Your ONNX* Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX">
<tab type="user" title="Convert ONNX* Faster R-CNN Model to the Intermediate Representation" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Faster_RCNN"/>
<tab type="user" title="Convert ONNX* Mask R-CNN Model to the Intermediate Representation" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_Mask_RCNN"/>
<tab type="user" title="Converting DLRM ONNX* Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_DLRM"/>
<tab type="user" title="Convert PyTorch* QuartzNet Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_QuartzNet"/>
<tab type="user" title="Convert PyTorch* YOLACT Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_YOLACT"/>
<tab type="user" title="Convert PyTorch* F3Net Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_F3Net"/>
<tab type="user" title="Convert ONNX* GPT-2 Model to the Intermediate Representation" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_GPT2"/>
<tab type="user" title="Convert DLRM ONNX* Model to the Intermediate Representation" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_onnx_specific_Convert_DLRM"/>
<tab type="usergroup" title="Converting Your PyTorch* Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch">
<tab type="user" title="Convert PyTorch* QuartzNet Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_QuartzNet"/>
<tab type="user" title="Convert PyTorch* RNN-T Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_RNNT"/>
<tab type="user" title="Convert PyTorch* YOLACT Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_YOLACT"/>
<tab type="user" title="Convert PyTorch* F3Net Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_pytorch_specific_Convert_F3Net"/>
</tab>
</tab>
<tab type="user" title="Model Optimizations Techniques" url="@ref openvino_docs_MO_DG_prepare_model_Model_Optimization_Techniques"/>
<tab type="user" title="Cutting off Parts of a Model" url="@ref openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model"/>

@@ -21,7 +21,8 @@ This guide provides the steps for creating a Docker* image with Intel® Distribu

Prebuilt images are available on:
- [Docker Hub](https://hub.docker.com/u/openvino)
- [Quay.io](https://quay.io/organization/openvino)
- [Red Hat* Quay.io](https://quay.io/organization/openvino)
- [Red Hat* Ecosystem Catalog](https://catalog.redhat.com/software/containers/intel/openvino-runtime/606ff4d7ecb5241699188fb3)

## Use Docker* Image for CPU

@@ -1,30 +1,28 @@

# Create a Yocto* Image with OpenVINO™ toolkit {#openvino_docs_install_guides_installing_openvino_yocto}
This document provides instructions for creating a Yocto* image with OpenVINO™ toolkit.

Instructions were validated and tested for [Yocto OpenVINO 2020.4 release](http://git.yoctoproject.org/cgit/cgit.cgi/meta-intel).
# Create a Yocto Image with Intel® Distribution of OpenVINO™ toolkit {#openvino_docs_install_guides_installing_openvino_yocto}
This document provides instructions for creating a Yocto image with Intel® Distribution of OpenVINO™ toolkit.

## System Requirements
Use the [Yocto Project* official documentation](https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#brief-compatible-distro) to set up and configure your host machine to be compatible with BitBake*.
Use the [Yocto Project official documentation](https://docs.yoctoproject.org/brief-yoctoprojectqs/index.html#compatible-linux-distribution) to set up and configure your host machine to be compatible with BitBake.

## Setup
## Step 1: Set Up Environment

### Set up Git repositories
### Set Up Git Repositories
The following Git repositories are required to build a Yocto image:

- [Poky](https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#poky)
- [Meta-intel](http://git.yoctoproject.org/cgit/cgit.cgi/meta-intel/tree/README)
- [Poky](https://git.yoctoproject.org/poky)
- [Meta-intel](https://git.yoctoproject.org/meta-intel/tree/README)
- [Meta-openembedded](http://cgit.openembedded.org/meta-openembedded/tree/README)
- <a href="https://github.com/kraj/meta-clang/blob/master/README.md">Meta-clang</a>

Clone these Git repositories to your host machine:
```sh
git clone https://git.yoctoproject.org/git/poky
git clone https://git.yoctoproject.org/git/meta-intel
git clone https://git.openembedded.org/meta-openembedded
git clone https://github.com/kraj/meta-clang.git
git clone https://git.yoctoproject.org/git/poky --branch hardknott
git clone https://git.yoctoproject.org/git/meta-intel --branch hardknott
git clone https://git.openembedded.org/meta-openembedded --branch hardknott
git clone https://github.com/kraj/meta-clang.git --branch hardknott
```

### Set up BitBake* Layers
### Set up BitBake Layers

```sh
source poky/oe-init-build-env

@@ -36,7 +34,7 @@ bitbake-layers add-layer ../meta-clang

### Set up BitBake Configurations

Include extra configuration in conf/local.conf in your build directory as required.
Include extra configuration in `conf/local.conf` in your build directory as required.

```sh
# Build with SSE4.2, AVX2 etc. extensions

@@ -67,22 +65,22 @@ CORE_IMAGE_EXTRA_INSTALL_append = " openvino-inference-engine-vpu-firmware"
CORE_IMAGE_EXTRA_INSTALL_append = " openvino-model-optimizer"
```

## Build a Yocto Image with OpenVINO Packages
## Step 2: Build a Yocto Image with OpenVINO Packages

Run BitBake to build the minimal image with OpenVINO packages:
Run BitBake to build your image with OpenVINO packages. To build the minimal image, for example, run:
```sh
bitbake core-image-minimal
```

## Verify the Created Yocto Image with OpenVINO Packages
## Step 3: Verify the Yocto Image with OpenVINO Packages

Verify that OpenVINO packages were built successfully.
Run 'oe-pkgdata-util list-pkgs | grep openvino' command.
Run the following command:
```sh
oe-pkgdata-util list-pkgs | grep openvino
```

Verify that it returns the list of packages below:
If the image was built successfully, it will return the list of packages as below:
```sh
openvino-inference-engine
openvino-inference-engine-dbg

@@ -90,7 +88,6 @@ openvino-inference-engine-dev
openvino-inference-engine-python3
openvino-inference-engine-samples
openvino-inference-engine-src
openvino-inference-engine-staticdev
openvino-inference-engine-vpu-firmware
openvino-model-optimizer
openvino-model-optimizer-dbg

@@ -1,20 +1,6 @@

/*
* Copyright 2017-2019 Intel Corporation.
* The source code, information and material ("Material") contained herein is
* owned by Intel Corporation or its suppliers or licensors, and title to such
* Material remains with Intel Corporation or its suppliers or licensors.
* The Material contains proprietary information of Intel or its suppliers and
* licensors. The Material is protected by worldwide copyright laws and treaty
* provisions.
* No part of the Material may be used, copied, reproduced, modified, published,
* uploaded, posted, transmitted, distributed or disclosed in any way without
* Intel's prior express written permission. No license under any patent,
* copyright or other intellectual property rights in the Material is granted to
* or conferred upon you, either expressly, by implication, inducement, estoppel
* or otherwise.
* Any license under such intellectual property rights must be express and
* approved by Intel in writing.
*/
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include "XLinkStringUtils.h"

@@ -1,20 +1,6 @@

/*
* Copyright 2017-2019 Intel Corporation.
* The source code, information and material ("Material") contained herein is
* owned by Intel Corporation or its suppliers or licensors, and title to such
* Material remains with Intel Corporation or its suppliers or licensors.
* The Material contains proprietary information of Intel or its suppliers and
* licensors. The Material is protected by worldwide copyright laws and treaty
* provisions.
* No part of the Material may be used, copied, reproduced, modified, published,
* uploaded, posted, transmitted, distributed or disclosed in any way without
* Intel's prior express written permission. No license under any patent,
* copyright or other intellectual property rights in the Material is granted to
* or conferred upon you, either expressly, by implication, inducement, estoppel
* or otherwise.
* Any license under such intellectual property rights must be express and
* approved by Intel in writing.
*/
// Copyright (C) 2018-2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include "mvnc_data.h"
#include "mvnc_tool.h"