# Configurations for Intel® Neural Compute Stick 2
@sphinxdirective
.. _ncs guide:
@endsphinxdirective
## Linux
Once you have OpenVINO™ Runtime installed, follow these steps to enable inference on Intel® Neural Compute Stick 2 (NCS2):
1. Add the current Linux user to the `users` group:

   ```sh
   sudo usermod -a -G users "$(whoami)"
   ```

2. Go to the `install_dependencies` directory:

   ```sh
   cd <INSTALL_DIR>/install_dependencies/
   ```

3. Copy the `97-myriad-usbboot.rules` file to the udev rules directory:

   ```sh
   sudo cp 97-myriad-usbboot.rules /etc/udev/rules.d/
   ```

4. Reload the udev rules so the copied rule takes effect:

   ```sh
   sudo udevadm control --reload-rules
   sudo udevadm trigger
   sudo ldconfig
   ```

5. You may need to reboot your machine for these changes to take effect.
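After completing the steps above, you can optionally check that the device enumerates on the USB bus. This is a minimal sketch, not part of the official setup: it assumes `lsusb` (from the `usbutils` package) is available, and matches on `03e7`, the Intel Movidius USB vendor ID.

```sh
# Look for the NCS2 on the USB bus; 03e7 is the Intel Movidius vendor ID.
# The product string can vary depending on the device's boot state.
lsusb | grep -i "03e7" || echo "NCS2 not detected"
```

If nothing matches, re-check the udev rule copy and try replugging the device.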
You've completed all required configuration steps to perform inference on Intel® Neural Compute Stick 2.
@sphinxdirective
.. _ncs guide macos:
@endsphinxdirective
## macOS
These steps are required only if you want to perform inference on Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X VPU.
To perform inference on Intel® Neural Compute Stick 2, the libusb library is required. You can build it from source or install it with your preferred macOS package manager, such as Homebrew or MacPorts.
For example, to install the libusb library using Homebrew, run:

```sh
brew install libusb
```
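To optionally confirm the installation, you can ask Homebrew which libusb version is present. This is a sketch beyond the required steps; it assumes Homebrew is on your `PATH`.

```sh
# Print the installed libusb version(s) known to Homebrew,
# or a fallback message if the package is not installed.
brew list --versions libusb || echo "libusb not found"
```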
You've completed all required configuration steps to perform inference on your Intel® Neural Compute Stick 2.
## What’s Next?
Now you are ready to try out OpenVINO™. You can use the following tutorials to write your applications using Python and C++.
Developing in Python:
- Start with TensorFlow models with OpenVINO™
- Start with ONNX and PyTorch models with OpenVINO™
- Start with PaddlePaddle models with OpenVINO™
Developing in C++:
- [Image Classification Async C++ Sample](@ref openvino_inference_engine_samples_classification_sample_async_README)
- [Hello Classification C++ Sample](@ref openvino_inference_engine_samples_hello_classification_README)
- [Hello Reshape SSD C++ Sample](@ref openvino_inference_engine_samples_hello_reshape_ssd_README)