DOCS: New Tutorials homepage (#13051)
* New Homepage for Tutorials: modify tutorials.md and add a separate article for the notebooks installation guide (notebooks-installation.md). Updated to the version from 13.07.2022: https://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20220713220805/dist/rst_files/
* Update consts.py
* Update consts.py
* Updating tutorials to the version from 13.09. Adding missing notebooks: 115, 203, 219, 220, 221, 222, 223
* Update docs/tutorials.md
* Updating meta tags: update the meta description and keywords
* Update notebooks-installation.md
* Update tutorials.md
Parent: ad933ce320
Commit: 2e354d85c0

docs/notebooks-installation.md (new file, 641 lines added)
@@ -0,0 +1,641 @@
# Installation of OpenVINO™ Notebooks {#notebooks-installation}


@sphinxdirective

.. _notebooks installation:

.. meta::
:description: An installation guide for Jupyter notebooks on which Python
tutorials run. The tutorials serve as an introduction to the
OpenVINO™ toolkit.
:keywords: OpenVINO™ toolkit, Jupyter notebooks, Jupyter, Python, Python API,
installation guide, tutorials, install notebooks, local
installation, OpenVINO™ Notebooks, run notebooks


The notebooks run almost anywhere: in a browser, on a desktop, in a cloud VM, or in a Docker container.
Follow the guide below to run and manage the notebooks on your machine.

--------------------

Contents:

- `Installation Guide <#-installation-guide>`__
- `Run the Notebooks <#-run-the-notebooks>`__
- `Manage the Notebooks <#-manage-the-notebooks>`__
- `Troubleshooting <#-troubleshooting>`__
- `FAQ <#-faq>`__

--------------------

.. raw:: html

<a name="-installation-guide">

`Installation Guide`_
=====================

The table below lists the supported operating systems and Python versions.

+-------------------------------------+--------------------------------+
| Supported Operating System (64-bit) | `Python Version |
| | (64-bit |
| | ) <https://www.python.org/>`__ |
+=====================================+================================+
| Ubuntu 18.04 LTS | 3.6, 3.7, 3.8, 3.9 |
+-------------------------------------+--------------------------------+
| Ubuntu 20.04 LTS | 3.6, 3.7, 3.8, 3.9 |
+-------------------------------------+--------------------------------+
| Red Hat Enterprise Linux 8 | 3.6, 3.8, 3.9 |
+-------------------------------------+--------------------------------+
| CentOS 7 | 3.6, 3.7, 3.8, 3.9 |
+-------------------------------------+--------------------------------+
| macOS 10.15.x versions | 3.6, 3.7, 3.8, 3.9 |
+-------------------------------------+--------------------------------+
| Windows 10 Pro, Enterprise | 3.6, 3.7, 3.8, 3.9 |
| or Education editions | |
+-------------------------------------+--------------------------------+
| Windows Server 2016 or higher | 3.6, 3.7, 3.8, 3.9 |
+-------------------------------------+--------------------------------+

OpenVINO Notebooks also require Git. Follow the guide below for your
operating system or environment.

`Installing prerequisites`_
----------------------------

.. tab:: WINDOWS

1. **Install Python**

Download the 64-bit version of Python (3.6, 3.7, 3.8, or 3.9) from `python.org`_.

.. _python.org: https://www.python.org/downloads/windows/

Run the installer by double-clicking it. Follow the installation steps to set up the software.

While installing, make sure you check the box to *add Python to system PATH*. A quick way to verify both Python and Git after installation is shown at the end of this tab.


.. note::

Python software available in the Microsoft Store is not recommended. It may require additional packages.


2. **Install Git**

Download the 64-bit version of Git for Windows from `git-scm.org`_.

.. _git-scm.org: https://github.com/git-for-windows/git/releases/download/v2.36.0.windows.1/Git-2.36.0-64-bit.exe

Run the installer by double-clicking it. Follow the installation steps to set up the software.


3. **Install C++ Redistributable (For Python 3.8 only)**

Download the 64-bit version of the C++ Redistributable from `here`_.

.. _here: https://download.visualstudio.microsoft.com/download/pr/4100b84d-1b4d-487d-9f89-1354a7138c8f/5B0CBB977F2F5253B1EBE5C9D30EDBDA35DBD68FB70DE7AF5FAAC6423DB575B5/VC_redist.x64.exe

Run the installer by double-clicking it. Follow the installation steps to set up the software.

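As an optional sanity check, not part of the original steps, you can confirm from *Command Prompt* that both tools were added to PATH; each command below simply prints a version string.

.. code-block::

python --version
git --version
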
.. tab:: Linux Systems

1. **Install Python and Git**

.. note::

Linux systems may require installation of additional libraries.

The following installation steps should work on Ubuntu Desktop 18.04, 20.04, 20.10, and on Ubuntu Server.

.. code-block::

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python3-venv build-essential python3-dev git-all

The following installation steps should work on a clean install of Red Hat, CentOS, Amazon Linux 2, or Fedora. If any issues occur, see the `Troubleshooting <#-troubleshooting>`__ section.

.. code-block::

sudo yum update
sudo yum upgrade
sudo yum install python36-devel mesa-libGL

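As on Windows, an optional check that is not part of the original steps is to confirm that the interpreter and Git are available before creating the virtual environment later on:

.. code-block::

python3 --version
git --version
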
.. tab:: macOS

1. **Install Python**

Download Python (3.7, 3.8, or 3.9) from python.org, for example, this `installer`_.

.. _installer: https://www.python.org/ftp/python/3.7.9/python-3.7.9-macosx10.9.pkg

Run the installer by double-clicking it. Follow the installation steps to set up the software.

.. note::

Refer to the "Important Information" displayed during installation for information about SSL/TLS certificate validation and running the "Install Certificates.command". These certificates are required to run some of the notebooks.

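If the notebooks later fail with SSL certificate errors, running the certificate script shipped with the python.org installer usually resolves it. The path below is only the typical location for the 3.7 installer linked above and may differ for other Python versions:

.. code-block::

"/Applications/Python 3.7/Install Certificates.command"
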
.. tab:: Azure ML

.. note::

An Azure account and access to `Azure ML Studio <https://ml.azure.com/>`__ are required.

1. **Add a Compute Instance**

In Azure ML Studio, `add a compute instance <https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-manage-compute-instance?tabs=python>`__ and pick any CPU-based instance. At least 4 CPU cores and 8 GB of RAM are recommended.

|ml-studio-1|

2. **Start the Terminal**

Once the compute instance has started, open the terminal window and then follow the installation steps below.

|ml-studio-2|

.. tab:: Docker

To run the notebooks inside a Linux-based Docker container, use the Dockerfile:

.. code-block:: bash
:caption: Source: https://github.com/openvinotoolkit/openvino_notebooks/blob/main/Dockerfile

FROM quay.io/thoth-station/s2i-thoth-ubi8-py38:v0.29.0

LABEL name="OpenVINO(TM) Notebooks" \
maintainer="helena.kloosterman@intel.com" \
vendor="Intel Corporation" \
version="0.2.0" \
release="2021.4" \
summary="OpenVINO(TM) Developer Tools and Jupyter Notebooks" \
description="OpenVINO(TM) Notebooks Container"

ENV JUPYTER_ENABLE_LAB="true" \
ENABLE_MICROPIPENV="1" \
UPGRADE_PIP_TO_LATEST="1" \
WEB_CONCURRENCY="1" \
THOTH_ADVISE="0" \
THOTH_ERROR_FALLBACK="1" \
THOTH_DRY_RUN="1" \
THAMOS_DEBUG="0" \
THAMOS_VERBOSE="1" \
THOTH_PROVENANCE_CHECK="0"

USER root

# Upgrade NodeJS > 12.0
# Install dos2unix for line end conversion on Windows
RUN curl -sL https://rpm.nodesource.com/setup_14.x | bash - && \
yum remove -y nodejs && \
yum install -y nodejs mesa-libGL dos2unix libsndfile && \
yum -y update-minimal --security --sec-severity=Important --sec-severity=Critical --sec-severity=Moderate

# Copying in override assemble/run scripts
COPY .docker/.s2i/bin /tmp/scripts
# Copying in source code
COPY .docker /tmp/src
COPY .ci/patch_notebooks.py /tmp/scripts

# Git on Windows may convert line endings. Run dos2unix to enable
# building the image when the scripts have CRLF line endings.
RUN dos2unix /tmp/scripts/*
RUN dos2unix /tmp/src/builder/*

# Change file ownership to the assemble user. Builder image must support chown command.
RUN chown -R 1001:0 /tmp/scripts /tmp/src
USER 1001
RUN mkdir /opt/app-root/notebooks
COPY notebooks/ /opt/app-root/notebooks
RUN /tmp/scripts/assemble
RUN pip check
USER root
RUN dos2unix /opt/app-root/bin/*sh
RUN yum remove -y dos2unix
RUN chown -R 1001:0 .
RUN chown -R 1001:0 /opt/app-root/notebooks
USER 1001
# RUN jupyter lab build
CMD /tmp/scripts/run

`Installing notebooks`_
------------------------

.. tab:: WINDOWS

1. **Create a Virtual Environment**

If you have already installed *openvino-dev*, you may skip this step and proceed with the next one.

.. code-block::

python -m venv openvino_env

2. **Activate the Environment**

.. code-block::

openvino_env\Scripts\activate


3. **Clone the Repository**

Using the --depth=1 option for git clone reduces download size.

.. code-block::

git clone --depth=1 https://github.com/openvinotoolkit/openvino_notebooks.git
cd openvino_notebooks

4. **Upgrade PIP**

.. code-block::

python -m pip install --upgrade pip


5. **Install required packages**

.. code-block::

pip install -r requirements.txt


6. **Install the virtualenv Kernel in Jupyter**

.. code-block::

python -m ipykernel install --user --name openvino_env

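Optionally, as an extra check that is not part of the original steps, you can verify that the kernel was registered; ``openvino_env`` should appear in the list:

.. code-block::

jupyter kernelspec list
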
.. tab:: Linux Systems

1. **Create a Virtual Environment**

If you have already installed *openvino-dev*, you may skip this step and proceed with the next one.

.. code-block::

python3 -m venv openvino_env

2. **Activate the Environment**

.. code-block::

source openvino_env/bin/activate

3. **Clone the Repository**

Using the --depth=1 option for git clone reduces download size.

.. code-block::

git clone --depth=1 https://github.com/openvinotoolkit/openvino_notebooks.git
cd openvino_notebooks

4. **Upgrade PIP**

.. code-block::

python -m pip install --upgrade pip


5. **Install required packages**

.. code-block::

pip install -r requirements.txt

6. **Install the virtualenv Kernel in Jupyter**

.. code-block::

python -m ipykernel install --user --name openvino_env

.. tab:: macOS

1. **Create a Virtual Environment**

If you have already installed *openvino-dev*, you may skip this step and proceed with the next one.

.. code-block::

python3 -m venv openvino_env

2. **Activate the Environment**

.. code-block::

source openvino_env/bin/activate

3. **Clone the Repository**

Using the --depth=1 option for git clone reduces download size.

.. code-block::

git clone --depth=1 https://github.com/openvinotoolkit/openvino_notebooks.git
cd openvino_notebooks

4. **Upgrade PIP**

.. code-block::

python -m pip install --upgrade pip


5. **Install required packages**

.. code-block::

pip install -r requirements.txt

6. **Install the virtualenv Kernel in Jupyter**

.. code-block::

python -m ipykernel install --user --name openvino_env

.. tab:: Azure ML

1. **Create a Virtual Environment**

If you have already installed *openvino-dev*, you may skip this step and proceed with the next one.

.. code-block::

python3 -m venv openvino_env

2. **Activate the Environment**

.. code-block::

source openvino_env/bin/activate

3. **Clone the Repository**

Using the --depth=1 option for git clone reduces download size.

.. code-block::

git clone --depth=1 https://github.com/openvinotoolkit/openvino_notebooks.git
cd openvino_notebooks

4. **Upgrade PIP**

.. code-block::

python -m pip install --upgrade pip


5. **Install required packages**

.. code-block::

pip install -r requirements.txt

6. **Install the virtualenv Kernel in Jupyter**

.. code-block::

python -m ipykernel install --user --name openvino_env

.. tab:: Docker

1. **Clone the Repository**

.. code-block::

git clone https://github.com/openvinotoolkit/openvino_notebooks.git
cd openvino_notebooks

2. **Build the Docker Image**

.. code-block::

docker build -t openvino_notebooks .

3. **Run the Docker Image**

.. code-block::

docker run -it -p 8888:8888 openvino_notebooks

.. note::

To use the model training notebooks, allocate additional memory:

.. code-block::

docker run -it -p 8888:8888 --shm-size 8G openvino_notebooks

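Optionally, to keep changes you make to the notebooks outside the container, you can mount your local clone into the image. This variant is not part of the original instructions; the target path comes from the Dockerfile shown above:

.. code-block::

docker run -it -p 8888:8888 -v ${PWD}/notebooks:/opt/app-root/notebooks openvino_notebooks
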
4. **Start the browser**

Copy the URL printed in the terminal window and open it in a browser. |br|
If it is a remote machine, replace 127.0.0.1 with the correct IP address.

|docker-terminal-1|

The Dockerfile can be used to run a local image on Windows, Linux or macOS.
It is also compatible with Open Data Hub and Red Hat OpenShift Data Science.
The base layer is a `UBI 8 <https://catalog.redhat.com/software/containers/ubi8/5c647760bed8bd28d0e38f9f?container-tabs=overview>`__-based image provided by `Project Thoth <https://thoth-station.ninja/>`__.

.. note::

While running the container on Windows and macOS, only CPU devices can be used. To access the iGPU, install the notebooks locally, following the instructions above.


--------------------

.. raw:: html

<a name="-run-the-notebooks"/>


`Run the Notebooks`_
====================

Launch a Single Notebook
------------------------------

If you want to launch only one notebook, such as the *Monodepth* notebook, run the command below.

.. code:: bash

jupyter lab 201-vision-monodepth.ipynb

Launch All Notebooks
--------------------------

.. code:: bash

jupyter lab notebooks

In your browser, select a notebook from the file browser in Jupyter Lab, using the left sidebar. Each tutorial is located in a subdirectory within the ``notebooks`` directory.

|launch-jupyter|

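If the notebooks run on a remote or headless machine, an optional variant not covered by the steps above is to start Jupyter without opening a local browser and bind it to a port that you expose or tunnel:

.. code:: bash

jupyter lab notebooks --no-browser --port 8888
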
--------------------

.. raw:: html

<a name="-manage-the-notebooks"/>

`Manage the Notebooks`_
========================

Shut Down Jupyter Kernel
---------------------------

To end your Jupyter session, press ``Ctrl-c``. This will prompt you with
``Shutdown this Jupyter server (y/[n])?``. Enter ``y`` and hit ``Enter``.

Deactivate Virtual Environment
------------------------------------

First, make sure you use the terminal window where you activated ``openvino_env``. To deactivate your ``virtualenv``, simply run:

.. code:: bash

deactivate

This will deactivate your virtual environment.

Reactivate Virtual Environment
------------------------------------

To reactivate your environment, run:

.. tab:: WINDOWS

.. code:: bash

openvino_env\Scripts\activate

.. tab:: Linux Systems

.. code:: bash

source openvino_env/bin/activate

.. tab:: macOS

.. code:: bash

source openvino_env/bin/activate


Then type ``jupyter lab`` or ``jupyter notebook`` to launch the notebooks again.

Delete Virtual Environment
-------------------------------------

This operation is optional. However, if you want to remove your virtual environment, simply delete the ``openvino_env`` directory:

.. tab:: WINDOWS

.. code:: bash

rmdir /s openvino_env

.. tab:: Linux Systems

.. code:: bash

rm -rf openvino_env

.. tab:: macOS

.. code:: bash

rm -rf openvino_env


Remove openvino_env Kernel from Jupyter
-------------------------------------------

.. code:: bash

jupyter kernelspec remove openvino_env


If you run into issues, check the `Troubleshooting <#-troubleshooting>`__ and `FAQ <#-faq>`__ sections or start a GitHub
`discussion <https://github.com/openvinotoolkit/openvino_notebooks/discussions>`__.

-------------------

.. raw:: html

<a name="-troubleshooting"/>

`Troubleshooting`_
====================

- To check for common installation problems, run
``python check_install.py``. This script is located in the
openvino_notebooks directory. Run it after activating the
``openvino_env`` virtual environment, as shown in the example below.
- If you get an ``ImportError``, double-check that you installed the
Jupyter kernel. If necessary, choose the ``openvino_env`` kernel from the
*Kernel->Change Kernel* menu in Jupyter Lab or Jupyter Notebook.
- If OpenVINO is installed globally, do not run installation commands
in a terminal where ``setupvars.bat`` or ``setupvars.sh`` are sourced.
- For Windows installation, it is recommended to use *Command Prompt
(cmd.exe)*, not *PowerShell*.

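For instance, a typical check on Linux or macOS looks like this (a sketch that assumes the repository was cloned into ``openvino_notebooks`` as described above; on Windows, activate with ``openvino_env\Scripts\activate`` instead):

.. code:: bash

cd openvino_notebooks
source openvino_env/bin/activate
python check_install.py
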
If these tips do not solve your problem, feel free to open a `discussion
topic <https://github.com/openvinotoolkit/openvino_notebooks/discussions>`__
or create an
`issue <https://github.com/openvinotoolkit/openvino_notebooks/issues>`__ on GitHub!

.. raw:: html

<a name="-faq"/>

`FAQ`_
========

- `Which devices does OpenVINO
support? <https://docs.openvino.ai/2022.1/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html>`__
- `What is the first CPU generation that OpenVINO
supports? <https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html>`__
- `Are there any success stories about deploying real-world solutions
with
OpenVINO? <https://www.intel.com/content/www/us/en/internet-of-things/ai-in-production/success-stories.html>`__

--------------

`Additional Resources`_
-------------------------

* `OpenVINO™ Notebooks - GitHub Repository <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/README.md>`_
* `Install OpenVINO™ Development Tools <https://docs.openvino.ai/nightly/openvino_docs_install_guides_install_dev_tools.html>`_


.. |br| raw:: html

<br />

.. |launch-jupyter| image:: https://user-images.githubusercontent.com/15709723/120527271-006fd200-c38f-11eb-9935-2d36d50bab9f.gif
.. |Apache License Version 2.0| image:: https://img.shields.io/badge/license-Apache_2.0-green.svg
:target: https://github.com/openvinotoolkit/openvino_notebooks/blob/main/LICENSE
.. |nbval| image:: https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/nbval.yml/badge.svg
:target: https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/nbval.yml?query=branch%3Amain
.. |nbval-docker| image:: https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/docker.yml/badge.svg
:target: https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/nbval.yml?query=branch%3Amain
.. |binder logo| image:: https://mybinder.org/badge_logo.svg
:alt: Binder button

.. |ml-studio-1| image:: https://user-images.githubusercontent.com/15709723/117559437-17463180-b03a-11eb-9e8d-d4539d1502f2.png

.. |ml-studio-2| image:: https://user-images.githubusercontent.com/15709723/117582205-b6f4d580-b0b5-11eb-9b83-eb2004ad9b19.png

.. |docker-terminal-1| image:: https://user-images.githubusercontent.com/15709723/127793994-355e4d29-d131-432d-a12a-b08ca6131223.png

@endsphinxdirective

docs/tutorials.md

@@ -4,26 +4,465 @@

.. _notebook tutorials:

.. meta::
:description: A collection of Python tutorials run on Jupyter notebooks. The
tutorials explain how to use OpenVINO™ toolkit for optimized
deep learning inference.
:keywords: OpenVINO™ toolkit, Jupyter, Jupyter notebooks, tutorials, Python
API, Python, deep learning, inference, model inference, infer a
model, Binder, object detection, quantization, image
classification, speech recognition, OCR, OpenVINO IR, deep
learning model, AI, neural networks

.. toctree::
:maxdepth: 2
:caption: Notebooks
:hidden:

notebooks-installation
notebooks/notebooks

@endsphinxdirective

This collection of Python tutorials is written for running on `Jupyter <https://jupyter.org>`__ notebooks.
The tutorials provide an introduction to the OpenVINO™ toolkit and explain how to
use the Python API and tools for optimized deep learning inference. You can run the
code one section at a time to see how to integrate your application with OpenVINO
libraries.

Notebooks with a |binder logo| button can be run without installing anything.
Once you have found a tutorial of interest, just click the button next to
its name and `Binder <https://mybinder.org/>`__ will start it in a new tab of a browser.
Binder is a free online service with limited resources (for more information about it,
see the `Additional Resources <#-additional-resources>`__ section).

@sphinxdirective
.. note::
For the best performance, more control, and resources, you should run the notebooks locally.
Follow the `Installation Guide <notebooks-installation.html>`__ to learn
how to run and manage the notebooks on your machine.

|binder_link|

.. |binder_link| raw:: html

--------------------

**Contents:**

- `Getting Started <#-getting-started>`__

  - `First steps with OpenVINO <#-first-steps>`__
  - `Convert & Optimize <#-convert--optimize>`__
  - `Model Demos <#-model-demos>`__
  - `Model Training <#-model-training>`__
  - `Live Demos <#-live-demos>`__

- `Recommended Tutorials <#-recommended-tutorials>`__
- `Additional Resources <#-additional-resources>`__
- `Contributors <#-contributors>`__

--------------------

.. raw:: html

<a name='-getting-started' id='-getting-started'/>

`Getting Started`_
==================

The Jupyter notebooks are categorized into five groups. Select one
related to your needs, or give them all a try. Good luck!

.. raw:: html

<a name='-first-steps' id='-first-steps' />


`First steps with OpenVINO`_
-------------------------------

Brief tutorials that demonstrate how to use the Python API for inference in OpenVINO.

+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| Notebook | Description | Preview |
+===============================================================================================================================+============================================================================================================================================+===========================================+
| `001-hello-world <notebooks/001-hello-world-with-output.html>`__ |br| |n001| | Classify an image with OpenVINO. | |n001-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `002-openvino-api <notebooks/002-openvino-api-with-output.html>`__ |br| |n002| | Learn the OpenVINO Python API. | |n002-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `003-hello-segmentation <notebooks/003-hello-segmentation-with-output.html>`__ |br| |n003| | Semantic segmentation with OpenVINO. | |n003-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `004-hello-detection <notebooks/004-hello-detection-with-output.html>`__ |br| |n004| | Text detection with OpenVINO. | |n004-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+

.. raw:: html

<a name='-convert--optimize' id='-convert--optimize'/>

`Convert & Optimize`_
-----------------------

Tutorials that explain how to optimize and quantize models with OpenVINO tools.

+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| Notebook | Description | Preview |
+===============================================================================================================================+============================================================================================================================================+===========================================+
| `101-tensorflow-to-openvino <notebooks/101-tensorflow-to-openvino-with-output.html>`__ |br| |n101| | Convert TensorFlow models to OpenVINO IR. | |n101-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `102-pytorch-onnx-to-openvino <notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__ | Convert PyTorch models to OpenVINO IR. | |n102-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `103-paddle-onnx-to-openvino <notebooks/103-paddle-onnx-to-openvino-classification-with-output.html>`__ |br| |n103| | Convert PaddlePaddle models to OpenVINO IR. | |n103-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `104-model-tools <notebooks/104-model-tools-with-output.html>`__ |br| |n104| | Download, convert and benchmark models from Open Model Zoo. | |n104-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+

.. dropdown:: Explore more notebooks here.

+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| Notebook | Description |
+==============================================================================================================================+==================================================================================================================================+
| `105-language-quantize-bert <notebooks/105-language-quantize-bert-with-output.html>`__ | Optimize and quantize a pre-trained BERT model |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `106-auto-device <notebooks/106-auto-device-with-output.html>`__ | Demonstrates how to use AUTO Device |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `107-speech-recognition-quantization <notebooks/107-speech-recognition-quantization-with-output.html>`__ | Optimize and quantize a pre-trained Wav2Vec2 speech model |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `110-ct-segmentation-quantize <notebooks/110-ct-segmentation-quantize-with-output.html>`__ | Quantize a kidney segmentation model and show live inference |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `111-detection-quantization <notebooks/111-detection-quantization-with-output.html>`__ |br| |n111| | Quantize an object detection model |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `112-pytorch-post-training-quantization-nncf <notebooks/112-pytorch-post-training-quantization-nncf-with-output.html>`__ | Use Neural Network Compression Framework (NNCF) to quantize PyTorch model in post-training mode (without model fine-tuning) |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `113-image-classification-quantization <notebooks/113-image-classification-quantization-with-output.html>`__ | Quantize mobilenet image classification |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `114-quantization-simplified-mode <notebooks/114-quantization-simplified-mode-with-output.html>`__ | Quantize Image Classification Models with POT in Simplified Mode |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| `115-async-api <notebooks/115-async-api-with-output.html>`__ | Use Asynchronous Execution to Improve Data Pipelining |
+------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------+

.. raw:: html

<a name='-model-demos' id='-model-demos'/>

`Model Demos`_
----------------

Demos that show inference on a particular model.

+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| Notebook | Description | Preview |
+===============================================================================================================================+============================================================================================================================================+===========================================+
| `210-ct-scan-live-inference <notebooks/210-ct-scan-live-inference-with-output.html>`__ |br| |n210| | Show live inference on segmentation of CT-scan data. | |n210-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `211-speech-to-text <notebooks/211-speech-to-text-with-output.html>`__ |br| |n211| | Run inference on speech-to-text recognition model. | |n211-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `208-optical-character-recognition <notebooks/208-optical-character-recognition-with-output.html>`__ | Annotate text on images using text recognition resnet. | |n208-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `209-handwritten-ocr <notebooks/209-handwritten-ocr-with-output.html>`__ |br| |n209| | OCR for handwritten simplified Chinese and Japanese. | |n209-img1| |br| |chinese-text| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `218-vehicle-detection-and-recognition <notebooks/218-vehicle-detection-and-recognition-with-output.html>`__ | Use pre-trained models to detect and recognize vehicles and their attributes with OpenVINO. | |n218-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+


.. dropdown:: Explore more notebooks below.

+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| Notebook | Description | Preview |
+===============================================================================================================================+============================================================================================================================================+===========================================+
| `201-vision-monodepth <notebooks/201-vision-monodepth-with-output.html>`__ |br| |n201| | Monocular depth estimation with images and video. | |n201-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `202-vision-superresolution-image <notebooks/202-vision-superresolution-image-with-output.html>`__ |br| |n202i| | Upscale raw images with a super resolution model. | |n202i-img1| → |n202i-img2| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `202-vision-superresolution-video <notebooks/202-vision-superresolution-video-with-output.html>`__ |br| |n202v| | Turn 360p into 1080p video using a super resolution model. | |n202v-img1| → |n202v-img2| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `203-meter-reader <notebooks/203-meter-reader-with-output.html>`__ |br| |n203| | PaddlePaddle pre-trained models to read industrial meter's value | |n203-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `204-named-entity-recognition <notebooks/204-named-entity-recognition-with-output.html>`__ |br| |n204| | Perform named entity recognition on simple text. | |n204-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `205-vision-background-removal <notebooks/205-vision-background-removal-with-output.html>`__ |br| |n205| | Remove and replace the background in an image using salient object detection. | |n205-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `206-vision-paddlegan-anime <notebooks/206-vision-paddlegan-anime-with-output.html>`__ |br| |n206| | Turn an image into anime using a GAN. | |n206-img1| → |n206-img2| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `207-vision-paddlegan-superresolution <notebooks/207-vision-paddlegan-superresolution-with-output.html>`__ |br| |n207| | Upscale small images with superresolution using a PaddleGAN model. | |n207-img1| → |n207-img2| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `212-onnx-style-transfer <notebooks/212-onnx-style-transfer-with-output.html>`__ |br| |n212| | Transform images to five different styles with neural style transfer. | |n212-img1| → |n212-img2| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `214-vision-paddle-classification <notebooks/214-vision-paddle-classification-with-output.html>`__ |br| |n214| | PaddlePaddle Image Classification with OpenVINO. | |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `215-image-inpainting <notebooks/215-image-inpainting-with-output.html>`__ | Fill missing pixels with image in-painting. | |n215-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `216-license-plate-recognition <notebooks/216-license-plate-recognition-with-output.html>`__ | Recognize Chinese license plates in traffic. | |n216-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `217-vision-deblur <notebooks/217-vision-deblur-with-output.html>`__ |br| |n217| | Deblur Images with DeblurGAN-v2. | |n217-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `219-knowledge-graphs-conve <notebooks/219-knowledge-graphs-conve-with-output.html>`__ | Optimize the knowledge graph embeddings model (ConvE) with OpenVINO | |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `220-yolov5-accuracy-check-and-quantization <notebooks/220-yolov5-accuracy-check-and-quantization-with-output.html>`__ | Quantize the Ultralytics YOLOv5 model and check accuracy using the OpenVINO POT API | |n220-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `221-machine-translation <notebooks/221-machine-translation-with-output.html>`__ | Real-time translation from English to German | |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `222-vision-image-colorization <notebooks/222-vision-image-colorization-with-output.html>`__ | Use pre-trained models to colorize black & white images using OpenVINO | |n222-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `223-gpt2-text-prediction <notebooks/223-gpt2-text-prediction-with-output.html>`__ | Use GPT-2 to perform text prediction on an input sequence | |n223-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+


.. raw:: html

<a name='-model-training' id='-model-training' />

`Model Training`_
------------------

Tutorials that include code to train neural networks.

+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| Notebook | Description | Preview |
+===============================================================================================================================+============================================================================================================================================+===========================================+
| `301-tensorflow-training-openvino <notebooks/301-tensorflow-training-openvino-with-output.html>`__ | Train a flower classification model from TensorFlow, then convert to OpenVINO IR. | |n301-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `301-tensorflow-training-openvino-pot <notebooks/301-tensorflow-training-openvino-pot-with-output.html>`__ | Use Post-training Optimization Tool (POT) to quantize the flowers model. | |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `302-pytorch-quantization-aware-training <notebooks/302-pytorch-quantization-aware-training-with-output.html>`__ | Use Neural Network Compression Framework (NNCF) to quantize PyTorch model. | |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `305-tensorflow-quantization-aware-training <notebooks/305-tensorflow-quantization-aware-training-with-output.html>`__ | Use Neural Network Compression Framework (NNCF) to quantize TensorFlow model. | |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+

.. raw:: html

<a name='-live-demos' id='-live-demos' />

`Live Demos`_
---------------

Live inference demos that run on a webcam or video files.

+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| Notebook | Description | Preview |
+===============================================================================================================================+============================================================================================================================================+===========================================+
| `401-object-detection-webcam <notebooks/401-object-detection-with-output.html>`__ |br| |n401| | Object detection with a webcam or video file. | |n401-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `402-pose-estimation-webcam <notebooks/402-pose-estimation-with-output.html>`__ |br| |n402| | Human pose estimation with a webcam or video file. | |n402-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `403-action-recognition-webcam <notebooks/403-action-recognition-webcam-with-output.html>`__ |br| |n403| | Human action recognition with a webcam or video file. | |n403-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `405-paddle-ocr-webcam <notebooks/405-paddle-ocr-webcam-with-output.html>`__ |br| |n405| | OCR with a webcam or video file | |n405-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+

.. raw:: html

<a name='-recommended-tutorials' id='-recommended-tutorials'/>

`Recommended Tutorials`_
--------------------------

The following tutorials are recommended as a starting point for working with inference in OpenVINO:

+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| Notebook | | Preview |
+===============================================================================================================================+============================================================================================================================================+===========================================+
| `Vision-monodepth <notebooks/201-vision-monodepth-with-output.html>`__ |br| |n201| | Monocular depth estimation with images and video. | |n201-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `CT-scan-live-inference <notebooks/210-ct-scan-live-inference-with-output.html>`__ |br| |n210| | Show live inference on segmentation of CT-scan data. | |n210-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `Object-detection-webcam <notebooks/401-object-detection-with-output.html>`__ |br| |n401| | Object detection with a webcam or video file. | |n401-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `Pose-estimation-webcam <notebooks/402-pose-estimation-with-output>`__ |br| |n402| | Human pose estimation with a webcam or video file. | |n402-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
| `Action-recognition-webcam <notebooks/403-action-recognition-webcam-with-output.html>`__ |br| |n403| | Human action recognition with a webcam or video file. | |n403-img1| |
+-------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------------------+
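
All of these tutorials build on the same basic OpenVINO inference flow: read a model, compile it for a target device, and run inference on prepared input data. The snippet below is a minimal sketch of that flow with the OpenVINO Python API; the model path and the random input are placeholders only, as each notebook downloads and prepares its own model and data.

.. code-block:: python

   import numpy as np
   from openvino.runtime import Core

   core = Core()
   # Read a model in IR, ONNX or PaddlePaddle format ("model/my_model.xml" is a placeholder path).
   model = core.read_model("model/my_model.xml")
   compiled_model = core.compile_model(model, device_name="CPU")

   input_layer = compiled_model.input(0)
   output_layer = compiled_model.output(0)

   # Dummy input with the shape the model expects; the notebooks load real images, audio or text instead.
   input_data = np.random.rand(*input_layer.shape).astype(np.float32)
   result = compiled_model([input_data])[output_layer]
   print(result.shape)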
-------------------
.. note::
If there are any issues while running the notebooks, refer to the **Troubleshooting** and **FAQ** sections in the `Installation Guide <notebooks-installation.html>`__ or start a GitHub
`discussion <https://github.com/openvinotoolkit/openvino_notebooks/discussions>`__.


.. raw:: html
<a name='-additional-resources' id='-additional-resources'/>
`Additional Resources`_
-------------------------
* `OpenVINO™ Notebooks - Github Repository <https://github.com/openvinotoolkit/openvino_notebooks/blob/main/README.md>`_
* `Binder documentation <https://mybinder.readthedocs.io/en/latest/>`_


.. raw:: html
<a name='-contributors' id='-contributors' />
`Contributors`_
--------------------------
|contributors|
Made with `contributors-img <https://contrib.rocks>`__.


.. |br| raw:: html
<br />
.. |chinese-text| raw:: html
<span style="font-size:10px">的人不一了是他有为在责新中任自之我们</span>
.. |contributors| image:: https://contrib.rocks/image?repo=openvinotoolkit/openvino_notebooks
:target: https://github.com/openvinotoolkit/openvino_notebooks/graphs/contributors
.. |n001-img1| image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:target: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
.. |n002-img1| image:: https://user-images.githubusercontent.com/15709723/127787560-d8ec4d92-b4a0-411f-84aa-007e90faba98.png
:target: https://user-images.githubusercontent.com/15709723/127787560-d8ec4d92-b4a0-411f-84aa-007e90faba98.png
.. |n003-img1| image:: https://user-images.githubusercontent.com/15709723/128290691-e2eb875c-775e-4f4d-a2f4-15134044b4bb.png
:target: https://user-images.githubusercontent.com/15709723/128290691-e2eb875c-775e-4f4d-a2f4-15134044b4bb.png
.. |n004-img1| image:: https://user-images.githubusercontent.com/36741649/128489933-bf215a3f-06fa-4918-8833-cb0bf9fb1cc7.jpg
:target: https://user-images.githubusercontent.com/36741649/128489933-bf215a3f-06fa-4918-8833-cb0bf9fb1cc7.jpg
.. |n101-img1| image:: https://user-images.githubusercontent.com/15709723/127779167-9d33dcc6-9001-4d74-a089-8248310092fe.png
:target: https://user-images.githubusercontent.com/15709723/127779167-9d33dcc6-9001-4d74-a089-8248310092fe.png
.. |n102-img1| image:: https://user-images.githubusercontent.com/15709723/127779246-32e7392b-2d72-4a7d-b871-e79e7bfdd2e9.png
:target: https://user-images.githubusercontent.com/15709723/127779246-32e7392b-2d72-4a7d-b871-e79e7bfdd2e9.png
.. |n103-img1| image:: https://user-images.githubusercontent.com/15709723/127779326-dc14653f-a960-4877-b529-86908a6f2a61.png
:target: https://user-images.githubusercontent.com/15709723/127779326-dc14653f-a960-4877-b529-86908a6f2a61.png
.. |n104-img1| image:: https://user-images.githubusercontent.com/10940214/157541917-c5455105-b0d9-4adf-91a7-fbc142918015.png
:target: https://user-images.githubusercontent.com/10940214/157541917-c5455105-b0d9-4adf-91a7-fbc142918015.png
.. |n210-img1| image:: https://user-images.githubusercontent.com/15709723/134784204-cf8f7800-b84c-47f5-a1d8-25a9afab88f8.gif
:target: https://user-images.githubusercontent.com/15709723/134784204-cf8f7800-b84c-47f5-a1d8-25a9afab88f8.gif
.. |n211-img1| image:: https://user-images.githubusercontent.com/36741649/140987347-279de058-55d7-4772-b013-0f2b12deaa61.png
:target: https://user-images.githubusercontent.com/36741649/140987347-279de058-55d7-4772-b013-0f2b12deaa61.png
.. |n213-img1| image:: https://user-images.githubusercontent.com/4547501/152571639-ace628b2-e3d2-433e-8c28-9a5546d76a86.gif
:target: https://user-images.githubusercontent.com/4547501/152571639-ace628b2-e3d2-433e-8c28-9a5546d76a86.gif
.. |n208-img1| image:: https://user-images.githubusercontent.com/36741649/129315292-a37266dc-dfb2-4749-bca5-2ac9c1e93d64.jpg
:target: https://user-images.githubusercontent.com/36741649/129315292-a37266dc-dfb2-4749-bca5-2ac9c1e93d64.jpg
.. |n209-img1| image:: https://user-images.githubusercontent.com/36741649/132660640-da2211ec-c389-450e-8980-32a75ed14abb.png
:target: https://user-images.githubusercontent.com/36741649/132660640-da2211ec-c389-450e-8980-32a75ed14abb.png
.. |n201-img1| image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif
:target: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif
.. |n202i-img1| image:: https://github.com/openvinotoolkit/openvino_notebooks/raw/main/notebooks/202-vision-superresolution/data/tower.jpg
:width: 70
:target: https://github.com/openvinotoolkit/openvino_notebooks/raw/main/notebooks/202-vision-superresolution/data/tower.jpg
.. |n202i-img2| image:: https://github.com/openvinotoolkit/openvino_notebooks/raw/main/notebooks/202-vision-superresolution/data/tower.jpg
:width: 130
:target: https://github.com/openvinotoolkit/openvino_notebooks/raw/main/notebooks/202-vision-superresolution/data/tower.jpg
.. |n202v-img1| image:: https://user-images.githubusercontent.com/15709723/127269258-a8e2c03e-731e-4317-b5b2-ed2ee767ff5e.gif
:target: https://user-images.githubusercontent.com/15709723/127269258-a8e2c03e-731e-4317-b5b2-ed2ee767ff5e.gif
:width: 80
.. |n202v-img2| image:: https://user-images.githubusercontent.com/15709723/127269258-a8e2c03e-731e-4317-b5b2-ed2ee767ff5e.gif
:width: 125
:target: https://user-images.githubusercontent.com/15709723/127269258-a8e2c03e-731e-4317-b5b2-ed2ee767ff5e.gif
.. |n203-img1| image:: https://user-images.githubusercontent.com/91237924/166135627-194405b0-6c25-4fd8-9ad1-83fb3a00a081.jpg
:target: https://user-images.githubusercontent.com/91237924/166135627-194405b0-6c25-4fd8-9ad1-83fb3a00a081.jpg
.. |n204-img1| image:: https://user-images.githubusercontent.com/33627846/169470030-0370963e-6ad8-49e3-be7a-f02a2c677733.gif
:target: https://user-images.githubusercontent.com/33627846/169470030-0370963e-6ad8-49e3-be7a-f02a2c677733.gif
.. |n205-img1| image:: https://user-images.githubusercontent.com/15709723/125184237-f4b6cd00-e1d0-11eb-8e3b-d92c9a728372.png
:target: https://user-images.githubusercontent.com/15709723/125184237-f4b6cd00-e1d0-11eb-8e3b-d92c9a728372.png
.. |n206-img1| image:: https://user-images.githubusercontent.com/15709723/127788059-1f069ae1-8705-4972-b50e-6314a6f36632.jpeg
:target: https://user-images.githubusercontent.com/15709723/127788059-1f069ae1-8705-4972-b50e-6314a6f36632.jpeg
.. |n206-img2| image:: https://user-images.githubusercontent.com/15709723/125184441-b4584e80-e1d2-11eb-8964-d8131cd97409.png
:target: https://user-images.githubusercontent.com/15709723/125184441-b4584e80-e1d2-11eb-8964-d8131cd97409.png
.. |n207-img1| image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:target: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:width: 70
.. |n207-img2| image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:target: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:width: 130
.. |n212-img1| image:: https://user-images.githubusercontent.com/77325899/147358090-ff5b21f5-0efb-4aff-8444-9d07add49b92.png
:target: https://user-images.githubusercontent.com/77325899/147358090-ff5b21f5-0efb-4aff-8444-9d07add49b92.png
.. |n212-img2| image:: https://user-images.githubusercontent.com/77325899/147358009-0cf10d51-3150-40cb-a776-074558b98da5.png
:target: https://user-images.githubusercontent.com/77325899/147358009-0cf10d51-3150-40cb-a776-074558b98da5.png
.. |n215-img1| image:: https://user-images.githubusercontent.com/4547501/167121084-ec58fbdb-b269-4de2-9d4c-253c5b95de1e.png
:target: https://user-images.githubusercontent.com/4547501/167121084-ec58fbdb-b269-4de2-9d4c-253c5b95de1e.png
.. |n216-img1| image:: https://user-images.githubusercontent.com/70456146/162759539-4a0a996f-dabe-40ea-98d6-85b4dce8511d.png
:target: https://user-images.githubusercontent.com/70456146/162759539-4a0a996f-dabe-40ea-98d6-85b4dce8511d.png
.. |n217-img1| image:: https://user-images.githubusercontent.com/41332813/158430181-05d07f42-cdb8-4b7a-b7dc-e7f7d9391877.png
:target: https://user-images.githubusercontent.com/41332813/158430181-05d07f42-cdb8-4b7a-b7dc-e7f7d9391877.png
.. |n218-img1| image:: https://user-images.githubusercontent.com/47499836/163544861-fa2ad64b-77df-4c16-b065-79183e8ed964.png
:target: https://user-images.githubusercontent.com/47499836/163544861-fa2ad64b-77df-4c16-b065-79183e8ed964.png
.. |n220-img1| image:: https://user-images.githubusercontent.com/44352144/177097174-cfe78939-e946-445e-9fce-d8897417ef8e.png
:target: https://user-images.githubusercontent.com/44352144/177097174-cfe78939-e946-445e-9fce-d8897417ef8e.png
.. |n222-img1| image:: https://user-images.githubusercontent.com/18904157/166343139-c6568e50-b856-4066-baef-5cdbd4e8bc18.png
:target: https://user-images.githubusercontent.com/18904157/166343139-c6568e50-b856-4066-baef-5cdbd4e8bc18.png
.. |n223-img1| image:: https://user-images.githubusercontent.com/91228207/185105225-0f996b0b-0a3b-4486-872d-364ac6fab68b.png
:target: https://user-images.githubusercontent.com/91228207/185105225-0f996b0b-0a3b-4486-872d-364ac6fab68b.png
.. |n301-img1| image:: https://user-images.githubusercontent.com/15709723/127779607-8fa34947-1c35-4260-8d04-981c41a2a2cc.png
:target: https://user-images.githubusercontent.com/15709723/127779607-8fa34947-1c35-4260-8d04-981c41a2a2cc.png
.. |n401-img1| image:: https://user-images.githubusercontent.com/4547501/141471665-82b28c86-cf64-4bfe-98b3-c314658f2d96.gif
:target: https://user-images.githubusercontent.com/4547501/141471665-82b28c86-cf64-4bfe-98b3-c314658f2d96.gif
.. |n402-img1| image:: https://user-images.githubusercontent.com/4547501/138267961-41d754e7-59db-49f6-b700-63c3a636fad7.gif
:target: https://user-images.githubusercontent.com/4547501/138267961-41d754e7-59db-49f6-b700-63c3a636fad7.gif
.. |n403-img1| image:: https://user-images.githubusercontent.com/10940214/151552326-642d6e49-f5a0-4fc1-bf14-ae3f457e1fec.gif
:target: https://user-images.githubusercontent.com/10940214/151552326-642d6e49-f5a0-4fc1-bf14-ae3f457e1fec.gif
.. |n405-img1| image:: https://raw.githubusercontent.com/yoyowz/classification/master/images/paddleocr.gif
:target: https://raw.githubusercontent.com/yoyowz/classification/master/images/paddleocr.gif
.. |launch-jupyter| image:: https://user-images.githubusercontent.com/15709723/120527271-006fd200-c38f-11eb-9935-2d36d50bab9f.gif
:target: https://user-images.githubusercontent.com/15709723/120527271-006fd200-c38f-11eb-9935-2d36d50bab9f.gif
.. |Apache License Version 2.0| image:: https://img.shields.io/badge/license-Apache_2.0-green.svg
:target: https://github.com/openvinotoolkit/openvino_notebooks/blob/main/LICENSE
.. |nbval| image:: https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/nbval.yml/badge.svg
:target: https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/nbval.yml?query=branch%3Amain
.. |nbval-docker| image:: https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/docker.yml/badge.svg
:target: https://github.com/openvinotoolkit/openvino_notebooks/actions/workflows/nbval.yml?query=branch%3Amain
.. |n001| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F001-hello-world%2F001-hello-world.ipynb
.. |n002| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F002-openvino-api%2F002-openvino-api.ipynb
.. |n003| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F003-hello-segmentation%2F003-hello-segmentation.ipynb
.. |n004| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F004-hello-detection%2F004-hello-detection.ipynb
.. |n101| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F101-tensorflow-to-openvino%2F101-tensorflow-to-openvino.ipynb
.. |n103| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F103-paddle-onnx-to-openvino-classification%2F103-paddle-onnx-to-openvino-classification.ipynb
.. |n104| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F104-model-tools%2F104-model-tools.ipynb
.. |n111| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F111-detection-quantization%2F111-detection-quantization.ipynb
.. |n210| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F210-ct-scan-live-inference%2F210-ct-scan-live-inference.ipynb
.. |n211| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F211-speech-to-text%2F211-speech-to-text.ipynb
.. |n213| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F213-question-answering%2F213-question-answering.ipynb
.. |n209| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F209-handwritten-ocr%2F209-handwritten-ocr.ipynb
.. |n201| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F201-vision-monodepth%2F201-vision-monodepth.ipynb
.. |n202i| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F202-vision-superresolution%2F202-vision-superresolution-image.ipynb
.. |n202v| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F202-vision-superresolution%2F202-vision-superresolution-video.ipynb
.. |n203| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?labpath=notebooks%2F203-meter-reader%2F203-meter-reader.ipynb
.. |n204| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F204-named-entity-recognition%2F204-named-entity-recognition.ipynb
.. |n205| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F205-vision-background-removal%2F205-vision-background-removal.ipynb
.. |n206| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F206-vision-paddlegan-anime%2F206-vision-paddlegan-anime.ipynb
.. |n207| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F207-vision-paddlegan-superresolution%2F207-vision-paddlegan-superresolution.ipynb
.. |n212| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F212-onnx-style-transfer%2F212-onnx-style-transfer.ipynb
.. |n214| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F214-vision-paddle-classification%2F214-vision-paddle-classification.ipynb
.. |n217| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/ThanosM97/openvino_notebooks/217-vision-deblur?labpath=notebooks%2F217-vision-deblur%2F217-vision-deblur.ipynb
.. |n401| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F401-object-detection-webcam%2F401-object-detection.ipynb
.. |n402| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F402-pose-estimation-webcam%2F402-pose-estimation.ipynb
.. |n403| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F403-action-recognition-webcam%2F403-action-recognition-webcam.ipynb
.. |n405| image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F405-paddle-ocr-webcam%2F405-paddle-ocr-webcam.ipynb
.. |binder logo| image:: https://mybinder.org/badge_logo.svg
:alt: Binder button
<a href="https://mybinder.org/v2/gh/openvinotoolkit/openvino_notebooks/HEAD?filepath=notebooks%2F001-hello-world%2F001-hello-world.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" alt="Binder"></a>
@endsphinxdirective


Tutorials that show this logo can be run remotely on Binder with no setup, although running the notebooks on a local system is recommended for the best performance. See the [OpenVINO™ Notebooks Installation Guide](https://github.com/openvinotoolkit/openvino_notebooks/blob/main/README.md#-installation-guide) to install and run them locally.