# Neural Style Transfer C++ Sample

This topic demonstrates how to run the Neural Style Transfer sample application, which performs inference of style transfer models.

> **NOTE**: The OpenVINO™ toolkit does not include a pre-trained model to run the Neural Style Transfer sample. A public model from the Zhaw's Neural Style Transfer repository can be used. Refer to the Converting a Style Transfer Model from MXNet* topic of the Model Optimizer Developer Guide to learn how to get the trained model and how to convert it to the Inference Engine format (`*.xml` + `*.bin`).

> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channel order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application, or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the **When to Reverse Input Channels** section of Converting a Model Using General Conversion Parameters.
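For illustration, a Model Optimizer invocation with the flag might look like the following. This is a hypothetical sketch: the script name, paths, and model file name are placeholders and depend on your OpenVINO installation and the model you downloaded, not on anything prescribed by this sample.

```sh
# Hypothetical example: convert an MXNet* style transfer model while
# reversing the expected input channel order from RGB to BGR.
# All paths and file names below are placeholders.
python3 mo_mxnet.py \
    --input_model <path_to_model>/decoder.params \
    --reverse_input_channels \
    --output_dir <output_dir>
```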

## Running

Running the application with the `-h` option yields the following usage message:

```sh
./style_transfer_sample --help
InferenceEngine:
    API version ............ <version>
    Build .................. <number>

style_transfer_sample [OPTION]
Options:

    -h                      Print a usage message
    -i "<path>"             Required. Path to a .bmp image file or a sequence of paths separated by spaces.
    -m "<path>"             Required. Path to an .xml file with a trained model.
    -d "<device>"           The target device to infer on (the list of available devices is shown below). Default value is CPU. Use "-d HETERO:<comma-separated_devices_list>" format to specify the HETERO plugin. The sample looks for a suitable plugin for the specified device.
    -mean_val_r,
    -mean_val_g,
    -mean_val_b             Mean values. Required if the model needs mean values for preprocessing and postprocessing.
```

Running the application with an empty list of options yields the usage message above and an error message.

To perform inference of an image using a trained NST network model on Intel® CPUs, use the following command:

```sh
./style_transfer_sample -i <path_to_image>/cat.bmp -m <path_to_model>/1_decoder_FP32.xml
```
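If the model requires mean-value preprocessing, the per-channel means can be supplied with the `-mean_val_r`, `-mean_val_g`, and `-mean_val_b` options listed in the usage message. A hypothetical invocation is shown below; the numeric values are example ImageNet-style means, not values prescribed by this sample, so substitute the means your model was trained with.

```sh
# Hypothetical example: run the sample with explicit per-channel mean
# values for a model that needs them for pre- and postprocessing.
# The values 103.939/116.779/123.68 are placeholders (common ImageNet means).
./style_transfer_sample -i <path_to_image>/cat.bmp \
    -m <path_to_model>/1_decoder_FP32.xml \
    -mean_val_r 103.939 -mean_val_g 116.779 -mean_val_b 123.68
```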

## Sample Output

The application outputs an image (`out1.bmp`) or a sequence of images (`out1.bmp`, ..., `out<N>.bmp`), redrawn in the style of the style transfer model used for the sample.

## See Also