[PyOV] Drop Python 3.6 support (#12280)

* Drop Python 3.6

* Test dropping Python to 3.6 in py_checks.yml

* Allow Python 3.6 for open source

* Add docs on upgrading Python
Author: Przemyslaw Wysocki
Date: 2022-09-22 11:58:12 +02:00 (committed by GitHub)
Parent: 2e6eaa6c7e
Commit: d5a274b0e4
26 changed files with 84 additions and 59 deletions


@@ -14,7 +14,7 @@ import json
 from pathlib import Path
-if sys.hexversion < 0x3060000:
+if sys.version_info[:2] < (3, 6):
     raise Exception("Python version must be >= 3.6")
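The replacement swaps the `hexversion` magic constant for a tuple comparison on `sys.version_info`; a minimal standalone sketch of the same guard:

```python
import sys

# Tuple comparison of (major, minor) is easier to read than the
# hexversion bit layout, and it orders correctly: (3, 10) > (3, 6).
MINIMUM = (3, 6)
if sys.version_info[:2] < MINIMUM:
    raise Exception("Python version must be >= 3.6")
print("Version check passed for", sys.version_info[:2])
```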


@@ -18,7 +18,7 @@ jobs:
 - name: Set up Python ${{ matrix.python-version }}
   uses: actions/setup-python@v1
   with:
-    python-version: 3.6
+    python-version: 3.7
 - name: Cache pip
   uses: actions/cache@v1


@@ -620,13 +620,13 @@ It means that you are trying to convert a topology contains the `_contrib_box_nm
 #### 99. What does the message "ModelOptimizer is not able to parse *.caffemodel" mean? <a name="question-99"></a>
-If a `*.caffemodel` file exists and is correct, the error occurred possibly because of the use of Python protobuf implementation. In some cases, error messages may appear during model parsing, for example: "`utf-8` codec can't decode byte 0xe0 in position 4: invalid continuation byte in field: mo_caffe.SpatialTransformerParameter.transform_type". You can either use Python 3.6/3.7 or build the `cpp` implementation of `protobuf` yourself for your version of Python. For the complete instructions about building `protobuf` from sources, see the appropriate section in the [Converting Models with Model Optimizer](../Deep_Learning_Model_Optimizer_DevGuide.md) guide.
+If a `*.caffemodel` file exists and is correct, the error occurred possibly because of the use of Python protobuf implementation. In some cases, error messages may appear during model parsing, for example: "`utf-8` codec can't decode byte 0xe0 in position 4: invalid continuation byte in field: mo_caffe.SpatialTransformerParameter.transform_type". You can either use Python 3.7 or build the `cpp` implementation of `protobuf` yourself for your version of Python. For the complete instructions about building `protobuf` from sources, see the appropriate section in the [Converting Models with Model Optimizer](../Deep_Learning_Model_Optimizer_DevGuide.md) guide.
 #### 100. What does the message "SyntaxError: 'yield' inside list comprehension" during MxNet model conversion mean? <a name="question-100"></a>
 The issue "SyntaxError: `yield` inside list comprehension" might occur during converting MXNet models (`mobilefacedet-v1-mxnet`, `brain-tumor-segmentation-0001`) on Windows platform with Python 3.8 environment. This issue is caused by the API changes for `yield expression` in Python 3.8.
 The following workarounds are suggested to resolve this issue:
-1. Use Python 3.6/3.7 to convert MXNet models on Windows
+1. Use Python 3.7 to convert MXNet models on Windows
 2. Update Apache MXNet by using `pip install mxnet==1.7.0.post2`
 Note that it might have conflicts with previously installed PyPI dependencies.
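The `yield`-in-comprehension failure described above can be reproduced without MXNet by compiling a small snippet; Python 3.8 turned this pattern into a hard `SyntaxError` (an illustration only, not MXNet's actual code):

```python
# `yield` inside a list comprehension compiled (with a warning) on
# Python 3.7 but is rejected outright from Python 3.8 onward.
src = "def gen():\n    return [(yield x) for x in range(3)]\n"
try:
    compile(src, "<snippet>", "exec")
    outcome = "compiled"
except SyntaxError:
    outcome = "SyntaxError"
print(outcome)
```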


@@ -10,12 +10,8 @@ This guide provides steps on creating a Docker image with Intel® Distribution o
 +----------------------------------------------+--------------------------+
 | Operating System | Supported Python Version |
 +==============================================+==========================+
-| Ubuntu 18.04 long-term support (LTS), 64-bit | 3.6 |
-+----------------------------------------------+--------------------------+
 | Ubuntu 20.04 long-term support (LTS), 64-bit | 3.8 |
 +----------------------------------------------+--------------------------+
-| Red Hat Enterprise Linux 8, 64-bit | 3.6 |
-+----------------------------------------------+--------------------------+
 .. tab:: Host Operating Systems


@@ -26,7 +26,7 @@
 .. tab:: Software Requirements
 * CMake 3.7.2 or higher
-* Python 3.6-3.8, 32-bit
+* Python 3.7-3.8, 32-bit
 @endsphinxdirective


@@ -13,11 +13,11 @@ Before you start the installation, check the supported operating systems and req
 | Supported Operating System | [Python* Version (64-bit)](https://www.python.org/) |
 | :------------------------------------------------------------| :---------------------------------------------------|
-| Ubuntu* 18.04 long-term support (LTS), 64-bit | 3.6, 3.7, 3.8 |
-| Ubuntu* 20.04 long-term support (LTS), 64-bit | 3.6, 3.7, 3.8, 3.9 |
-| Red Hat* Enterprise Linux* 8, 64-bit | 3.6, 3.8 |
-| macOS* 10.15.x | 3.6, 3.7, 3.8, 3.9 |
-| Windows 10*, 64-bit | 3.6, 3.7, 3.8, 3.9 |
+| Ubuntu* 18.04 long-term support (LTS), 64-bit | 3.7, 3.8 |
+| Ubuntu* 20.04 long-term support (LTS), 64-bit | 3.7, 3.8, 3.9 |
+| Red Hat* Enterprise Linux* 8, 64-bit | 3.8 |
+| macOS* 10.15.x | 3.7, 3.8, 3.9 |
+| Windows 10*, 64-bit | 3.7, 3.8, 3.9 |
 **C++ libraries** are also required for the installation on Windows*. To install that, you can [download the Visual Studio Redistributable file (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe).


@@ -12,11 +12,11 @@ Before you start the installation, check the supported operating systems and req
 | Supported Operating System | [Python* Version (64-bit)](https://www.python.org/) |
 | :------------------------------------------------------------| :---------------------------------------------------|
-| Ubuntu* 18.04 long-term support (LTS), 64-bit | 3.6, 3.7, 3.8 |
-| Ubuntu* 20.04 long-term support (LTS), 64-bit | 3.6, 3.7, 3.8, 3.9 |
-| Red Hat* Enterprise Linux* 8, 64-bit | 3.6, 3.8 |
-| macOS* 10.15.x versions | 3.6, 3.7, 3.8, 3.9 |
-| Windows 10*, 64-bit | 3.6, 3.7, 3.8, 3.9 |
+| Ubuntu* 18.04 long-term support (LTS), 64-bit | 3.7, 3.8 |
+| Ubuntu* 20.04 long-term support (LTS), 64-bit | 3.7, 3.8, 3.9 |
+| Red Hat* Enterprise Linux* 8, 64-bit | 3.8 |
+| macOS* 10.15.x versions | 3.7, 3.8, 3.9 |
+| Windows 10*, 64-bit | 3.7, 3.8, 3.9 |
 **C++ libraries** are also required for the installation on Windows*. To install that, you can [download the Visual Studio Redistributable file (.exe)](https://aka.ms/vs/17/release/vc_redist.x64.exe).


@@ -140,9 +140,9 @@ elif [ -f /etc/redhat-release ]; then
 gstreamer1 \
 gstreamer1-plugins-base
-# Python 3.6 for Model Optimizer
-sudo -E yum install -y rh-python36
-source scl_source enable rh-python36
+# Python 3.7 for Model Optimizer
+sudo -E yum install -y rh-python37
+source scl_source enable rh-python37
 echo
 echo "FFmpeg is required for processing audio and video streams with OpenCV. Please select your preferred method for installing FFmpeg:"


@@ -15,12 +15,12 @@ Supported Python* versions:
 | Operating System | Supported Python\* versions: |
 |:----- | :----- |
-| Ubuntu\* 18.04 | 3.6, 3.7 |
-| Ubuntu\* 20.04 | 3.6, 3.7, 3.8 |
-| Windows\* 10 | 3.6, 3.7, 3.8 |
-| CentOS\* 7.3 | 3.6, 3.7 |
-| macOS\* 10.x | 3.6, 3.7 |
-| Raspbian\* 9 | 3.6, 3.7 |
+| Ubuntu\* 18.04 | 3.7 |
+| Ubuntu\* 20.04 | 3.7, 3.8 |
+| Windows\* 10 | 3.7, 3.8 |
+| CentOS\* 7.3 | 3.7 |
+| macOS\* 10.x | 3.7 |
+| Raspbian\* 9 | 3.7 |
 ## Set Up the Environment


@@ -4,6 +4,7 @@
 *To be added...*
 ##### Enviroment
+In case the Python version you have is not supported by OpenVINO, you can refer to [openvino/src/bindings/python/docs/python_version_upgrade.md](https://github.com/openvinotoolkit/openvino/blob/master/src/bindings/python/docs/python_version_upgrade.md) for instructions on how to download and build a newer, supported Python version.
 <!-- TODO: Link to enviroment setup -->
 *To be added...*


@@ -0,0 +1,37 @@
# Python version upgrade
#### Notes
The upgrade described in this document can be useful on a system such as Ubuntu 18.04, whose default Python (here, Python 3.6) is no longer supported. The recommended action is to move to a newer system rather than upgrading Python in place.
*Warning: You make all changes at your own risk.*
## Building and installing Python for Linux
Download Python from the [Python releases page](https://www.python.org/downloads/release) and extract it:
```bash
curl -O https://www.python.org/ftp/python/3.8.13/Python-3.8.13.tgz
tar -xf Python-3.8.13.tgz
```
Prepare the build with the `./configure` script, making sure that `pip` will be installed:
```bash
cd Python-3.8.13
./configure --with-ensurepip=install
```
Build Python with a number of parallel jobs suitable for your machine:
```bash
make -j 8
```
Install the new Python version, using the `altinstall` target to make sure the system Python is not overwritten:
```bash
sudo make altinstall
```
Verify your installation:
```bash
python3.8 --version
> Python 3.8.13
```
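As an additional sanity check (a sketch, independent of the exact 3.8.13 build above), running a short stdlib snippet with the freshly installed interpreter confirms which binary and version are actually in use:

```python
import platform
import sys

# Run this with the newly installed interpreter, e.g. `python3.8 check.py`,
# to confirm the binary path and the full version string.
print(sys.executable)
print(platform.python_version())
```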


@@ -78,7 +78,7 @@ show_column_numbers = True
 show_error_context = True
 show_absolute_path = True
 pretty = True
-follow_imports=normal
+follow_imports = normal
 disallow_untyped_defs = True
 disallow_untyped_calls = True
 check_untyped_defs = True


@@ -1557,7 +1557,6 @@ def matmul(
     :param transpose_b: should the second matrix be transposed
     :return: MatMul operation node
     """
-    print("transpose_a", transpose_a, "transpose_b", transpose_b)
     return _get_node_factory_opset1().create(
         "MatMul", as_nodes(data_a, data_b), {"transpose_a": transpose_a, "transpose_b": transpose_b}
     )
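For context, the `transpose_a`/`transpose_b` attributes of the MatMul node behave like the following NumPy sketch (an illustration only, not the OpenVINO implementation; `matmul_ref` is a hypothetical helper):

```python
import numpy as np

def matmul_ref(a, b, transpose_a=False, transpose_b=False):
    # Transpose the last two axes on request, then multiply,
    # mirroring MatMul-1's transpose_a/transpose_b attributes.
    if transpose_a:
        a = np.swapaxes(a, -1, -2)
    if transpose_b:
        b = np.swapaxes(b, -1, -2)
    return a @ b

a = np.ones((2, 3))
b = np.ones((2, 3))
print(matmul_ref(a, b, transpose_b=True).shape)  # (2, 2)
```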


@@ -2,8 +2,8 @@
 - [CMake\*](https://cmake.org/download/) 3.9 or later
 - Microsoft\* Visual Studio 2015 or later on Windows\*
 - gcc 4.8 or later on Linux
-- Python 2.7 or higher on Linux\*
-- Python 3.6 or higher on Windows\*
+- Python 3.7 or higher on Linux\*
+- Python 3.7 or higher on Windows\*
 ## Prerequisites
@@ -21,9 +21,9 @@ You need to run Inference Engine build with the following flags:
 cd <INSTALL_DIR>/openvino
 mkdir -p build
 cd build
-cmake -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE=`which python3.6` \
-    -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so \
-    -DPYTHON_INCLUDE_DIR=/usr/include/python3.6 ..
+cmake -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE=`which python3.7` \
+    -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
+    -DPYTHON_INCLUDE_DIR=/usr/include/python3.7 ..
 make -j16
```
@@ -65,7 +65,7 @@ sudo apt install patchelf
 ## Running sample
 Before running the Python samples:
-- add the folder with built `openvino` Python module (located at `bin/intel64/Release/lib/python_api/python3.6` for Linux) to the PYTHONPATH environment variable.
+- add the folder with built `openvino` Python module (located at `bin/intel64/Release/lib/python_api/python3.7` for Linux) to the PYTHONPATH environment variable.
 - add the folder with Inference Engine libraries to LD_LIBRARY_PATH variable on Linux (or PATH on Windows).
 Example of command line to run classification sample:


@@ -21,4 +21,4 @@ disable_error_code = attr-defined
 show_column_numbers = True
 show_error_context = True
 show_absolute_path = True
-pretty = True
+pretty = True


@@ -41,11 +41,11 @@ def pack_data(array: np.ndarray, type: Type) -> np.ndarray:
     pad = (-data_size) % num_values_fitting_into_uint8
     flattened = casted_to_regular_type.flatten()
-    padded = np.concatenate((flattened, np.zeros([pad], dtype=minimum_regular_dtype)))
+    padded = np.concatenate((flattened, np.zeros([pad], dtype=minimum_regular_dtype)))  # type: ignore
     assert padded.size % num_values_fitting_into_uint8 == 0
     bit_order_little = (padded[:, None] & (1 << np.arange(num_bits)) > 0).astype(minimum_regular_dtype)
-    bit_order_big = np.flip(bit_order_little, axis=1)
+    bit_order_big = np.flip(bit_order_little, axis=1)  # type: ignore
     bit_order_big_flattened = bit_order_big.flatten()
     return np.packbits(bit_order_big_flattened)
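The bit-expansion trick used by `pack_data` can be seen in isolation: each value is expanded into its bits least-significant-first, flipped to most-significant-first, and packed. A simplified sketch for a 1-bit element type (illustrative, not the library function):

```python
import numpy as np

num_bits = 1  # e.g. a u1 element type
values = np.array([1, 0, 1, 1, 0, 0, 0, 1], dtype=np.uint8)

# Expand each value into num_bits bits, least significant first...
bits_little = (values[:, None] & (1 << np.arange(num_bits)) > 0).astype(np.uint8)
# ...then flip to most-significant-first before packing, matching
# the bit_order_big step in pack_data.
bits_big = np.flip(bits_little, axis=1)
packed = np.packbits(bits_big.flatten())
print(packed)  # eight 1-bit values packed into a single byte
```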
@@ -74,7 +74,7 @@ def unpack_data(array: np.ndarray, type: Type, shape: Union[list, Shape]) -> np.
     else:
         unpacked = unpacked.reshape(-1, type.bitwidth)
     padding_shape = (unpacked.shape[0], 8 - type.bitwidth)
-    padding = np.ndarray(padding_shape, np.uint8)
+    padding = np.ndarray(padding_shape, np.uint8)  # type: np.ndarray
     if type == Type.i4:
         for axis, bits in enumerate(unpacked):
             if bits[0] == 1:
@@ -83,7 +83,7 @@ def unpack_data(array: np.ndarray, type: Type, shape: Union[list, Shape]) -> np.
             padding[axis] = np.zeros((padding_shape[1],), np.uint8)
         else:
             padding = np.zeros(padding_shape, np.uint8)
-    padded = np.concatenate((padding, unpacked), 1)
+    padded = np.concatenate((padding, unpacked), 1)  # type: ignore
     packed = np.packbits(padded, 1)
     if type == Type.i4:
         return np.resize(packed, shape).astype(dtype=np.int8)
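The `Type.i4` branch above pads each nibble so that negative values keep their sign; the underlying two's-complement rule can be sketched without NumPy (an illustration; `sign_extend_i4` is a hypothetical helper, not part of the library):

```python
def sign_extend_i4(nibble: int) -> int:
    # A 4-bit two's-complement value with its high bit set is
    # negative; subtracting 16 recovers the signed value.
    return nibble - 16 if nibble & 0b1000 else nibble

print(sign_extend_i4(0b0111))  # 7
print(sign_extend_i4(0b1111))  # -1
print(sign_extend_i4(0b1000))  # -8
```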


@@ -15,13 +15,11 @@ from openvino.pyopenvino import InferRequest as InferRequestBase
 from openvino.pyopenvino import AsyncInferQueue as AsyncInferQueueBase
 from openvino.pyopenvino import ConstOutput
 from openvino.pyopenvino import Tensor
-from openvino.pyopenvino import Type
-from openvino.pyopenvino import Shape
 def tensor_from_file(path: str) -> Tensor:
     """Create Tensor from file. Data will be read with dtype of unit8."""
-    return Tensor(np.fromfile(path, dtype=np.uint8))
+    return Tensor(np.fromfile(path, dtype=np.uint8))  # type: ignore
 def set_scalar_tensor(request: InferRequestBase, tensor: Tensor, key: Union[str, int, ConstOutput] = None) -> None:
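`tensor_from_file` simply loads a file's raw bytes as `uint8`; a stdlib-only sketch of that loading step (`bytes_from_file` is a hypothetical helper, no OpenVINO required):

```python
import os
import tempfile

def bytes_from_file(path: str) -> list:
    # Same idea as np.fromfile(path, dtype=np.uint8):
    # the whole file as a sequence of unsigned byte values.
    with open(path, "rb") as f:
        return list(f.read())

# Demo with a throwaway temporary file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\x01\x02\x03")
data = bytes_from_file(tmp.name)
os.unlink(tmp.name)
print(data)  # [1, 2, 3]
```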


@@ -411,7 +411,7 @@ def compare_models(current, expected): # noqa: C901 the function is too complex
             msg += f"expected: {expected_ops[i].get_output_element_type(idx)}. "
     if not result:
-        print(msg)
+        print(msg)  # noqa: T201
     return result


@@ -55,7 +55,7 @@ def test_load_by_unknown_framework():
     try:
         fem.load_by_framework("UnknownFramework")
     except InitializationFailure as exc:
-        print(exc)
+        print(exc)  # noqa: T201
     else:
         raise AssertionError("Unexpected exception.")


@@ -82,4 +82,4 @@ def all_arrays_equal(first_list, second_list):
     :param second_list: another iterable containing numpy ndarray objects
     :return: True if all ndarrays are equal, otherwise False
     """
-    return all(map(lambda pair: np.array_equal(*pair), zip(first_list, second_list)))
+    return all(map(lambda pair: np.array_equal(*pair), zip(first_list, second_list)))  # noqa: C417
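The `# noqa: C417` silences flake8's "unnecessary use of map" rule; a generator-expression form (a sketch, not the committed code) expresses the same check without needing the suppression:

```python
import numpy as np

def all_arrays_equal(first_list, second_list):
    # Generator-expression equivalent of the map/lambda one-liner;
    # short-circuits on the first mismatch just like all(map(...)).
    return all(np.array_equal(a, b) for a, b in zip(first_list, second_list))

same = all_arrays_equal([np.array([1, 2])], [np.array([1, 2])])
diff = all_arrays_equal([np.array([1, 2])], [np.array([1, 3])])
print(same, diff)  # True False
```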


@@ -29,7 +29,7 @@ def test_compare_models():
         status, _ = compare_models(model, model)
         assert status
     except RuntimeError:
-        print("openvino.test_utils.compare_models is not available")
+        print("openvino.test_utils.compare_models is not available")  # noqa: T201
 def generate_image(shape: Tuple = (1, 3, 32, 32), dtype: Union[str, np.dtype] = "float32") -> np.array:


@@ -358,7 +358,7 @@ def remove_rpath(file_path):
 def set_rpath(rpath, executable):
     """Setting rpath for linux and macOS libraries."""
-    print(f"Setting rpath {rpath} for {executable}")  # noqa: T001
+    print(f"Setting rpath {rpath} for {executable}")  # noqa: T001, T201
     cmd = []
     rpath_tool = ""


@@ -7,10 +7,6 @@ import numpy as np
 import sys
 import os
 # it's better to use PYTHON_PATH
-# import sys
-# sys.path.append('/home/itikhonov/OpenVINO/openvino/bin/intel64/Debug/lib/python_api/python3.6/')
-# from openvino.inference_engine import IECore
 def create_multi_output_model():
     paddle.enable_static()


@@ -51,8 +51,6 @@ if [[ $DISTRO == "centos" ]]; then
     python_binary=python3.7
 elif command -v python3.6 >/dev/null 2>&1; then
     python_binary=python3.6
-elif command -v python3.5 >/dev/null 2>&1; then
-    python_binary=python3.5
 fi
 else
     python_binary=python3


@@ -20,7 +20,7 @@ Post-Training Optimization Tool includes standalone command-line tool and Python
 ### System requirements
 - Ubuntu 18.04 or later (64-bit)
-- Python 3.6 or later
+- Python 3.7 or later
 - OpenVINO
 ### Installation (Temporary)


@@ -14,7 +14,7 @@ What else can I do?</a>
 - <a href="#quality">I have successfully quantized my model with a low accuracy drop and improved performance but the output video generated from the low precision model is much worse than from the full precision model. What could be the root cause?</a>
 - <a href="#longtime">The quantization process of my model takes a lot of time. Can it be decreased somehow?</a>
 - <a href="#import">I get "Import Error:... No such file or directory". How can I avoid it?</a>
-- <a href="#python">When I execute POT CLI, I get "File "/workspace/venv/lib/python3.6/site-packages/nevergrad/optimization/base.py", line 35... SyntaxError: invalid syntax". What is wrong?</a>
+- <a href="#python">When I execute POT CLI, I get "File "/workspace/venv/lib/python3.7/site-packages/nevergrad/optimization/base.py", line 35... SyntaxError: invalid syntax". What is wrong?</a>
 - <a href="#nomodule">What does a message "ModuleNotFoundError: No module named 'some\_module\_name'" mean?</a>
 - <a href="#dump">Is there a way to collect an intermidiate IR when the AccuracyAware mechanism fails?</a>
 - <a name="#outputs"> What do the messages "Output name: <result_operation_name> not found" or "Output node with <result_operation_name> is not found in graph" mean?</a>
@@ -89,9 +89,9 @@ The following configuration parameters also impact the quantization time duratio
 - `eval_requests_number`: the lower number, the more time might be required for the quantization
 Note that higher values of `stat_requests_number` and `eval_requests_number` increase memory consumption by POT.
-### <a name="python">When I execute POT CLI, I get "File "/workspace/venv/lib/python3.6/site-packages/nevergrad/optimization/base.py", line 35... SyntaxError: invalid syntax". What is wrong?</a>
+### <a name="python">When I execute POT CLI, I get "File "/workspace/venv/lib/python3.7/site-packages/nevergrad/optimization/base.py", line 35... SyntaxError: invalid syntax". What is wrong?</a>
-This error is reported when you have a Python version older than 3.6 in your environment. Upgrade your Python version.
+This error is reported when you have a Python version older than 3.7 in your environment. Upgrade your Python version.
 ### <a name="nomodule">What does a message "ModuleNotFoundError: No module named 'some\_module\_name'" mean?</a>