Feature/azaytsev/from 2021 4 (#9247)

* Added info on DockerHub CI Framework

* Feature/azaytsev/change layout (#3295)

* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>

* Updated openvino_docs.xml

* Updated the link to software license agreements

* Revert "Updated the link to software license agreements"

This reverts commit 706dac500e.

* Docs to Sphinx (#8151)

* docs to sphinx

* Update GPU.md

* Update CPU.md

* Update AUTO.md

* Update performance_int8_vs_fp32.md

* update

* update md

* updates

* disable doc ci

* disable ci

* fix index.rst

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
# Conflicts:
#	.gitignore
#	docs/CMakeLists.txt
#	docs/IE_DG/Deep_Learning_Inference_Engine_DevGuide.md
#	docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
#	docs/IE_DG/Extensibility_DG/VPU_Kernel.md
#	docs/IE_DG/InferenceEngine_QueryAPI.md
#	docs/IE_DG/Int8Inference.md
#	docs/IE_DG/Integrate_with_customer_application_new_API.md
#	docs/IE_DG/Model_caching_overview.md
#	docs/IE_DG/supported_plugins/GPU_RemoteBlob_API.md
#	docs/IE_DG/supported_plugins/HETERO.md
#	docs/IE_DG/supported_plugins/MULTI.md
#	docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
#	docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
#	docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
#	docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md
#	docs/MO_DG/prepare_model/convert_model/Converting_Model.md
#	docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md
#	docs/MO_DG/prepare_model/convert_model/Cutting_Model.md
#	docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RNNT.md
#	docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
#	docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md
#	docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
#	docs/doxygen/Doxyfile.config
#	docs/doxygen/ie_docs.xml
#	docs/doxygen/ie_plugin_api.config
#	docs/doxygen/ngraph_cpp_api.config
#	docs/doxygen/openvino_docs.xml
#	docs/get_started/get_started_macos.md
#	docs/get_started/get_started_raspbian.md
#	docs/get_started/get_started_windows.md
#	docs/img/cpu_int8_flow.png
#	docs/index.md
#	docs/install_guides/VisionAcceleratorFPGA_Configure.md
#	docs/install_guides/VisionAcceleratorFPGA_Configure_Windows.md
#	docs/install_guides/deployment-manager-tool.md
#	docs/install_guides/installing-openvino-linux.md
#	docs/install_guides/installing-openvino-macos.md
#	docs/install_guides/installing-openvino-windows.md
#	docs/optimization_guide/dldt_optimization_guide.md
#	inference-engine/ie_bridges/c/include/c_api/ie_c_api.h
#	inference-engine/ie_bridges/python/docs/api_overview.md
#	inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
#	inference-engine/ie_bridges/python/sample/speech_sample/README.md
#	inference-engine/ie_bridges/python/src/openvino/inference_engine/ie_api.pyx
#	inference-engine/include/ie_api.h
#	inference-engine/include/ie_core.hpp
#	inference-engine/include/ie_version.hpp
#	inference-engine/samples/benchmark_app/README.md
#	inference-engine/samples/speech_sample/README.md
#	inference-engine/src/plugin_api/exec_graph_info.hpp
#	inference-engine/src/plugin_api/file_utils.h
#	inference-engine/src/transformations/include/transformations_visibility.hpp
#	inference-engine/tools/benchmark_tool/README.md
#	ngraph/core/include/ngraph/ngraph.hpp
#	ngraph/frontend/onnx_common/include/onnx_common/parser.hpp
#	ngraph/python/src/ngraph/utils/node_factory.py
#	openvino/itt/include/openvino/itt.hpp
#	thirdparty/ade
#	tools/benchmark/README.md

* Cherry-picked remove font-family (#8211)

* Cherry-picked: Update get_started_scripts.md (#8338)

* doc updates (#8268)

* Various doc changes

* theme changes

* remove font-family (#8211)

* fix  css

* Update uninstalling-openvino.md

* fix css

* fix

* Fixes for Installation Guides

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: kblaszczak-intel <karol.blaszczak@intel.com>
# Conflicts:
#	docs/IE_DG/Bfloat16Inference.md
#	docs/IE_DG/InferenceEngine_QueryAPI.md
#	docs/IE_DG/OnnxImporterTutorial.md
#	docs/IE_DG/supported_plugins/AUTO.md
#	docs/IE_DG/supported_plugins/HETERO.md
#	docs/IE_DG/supported_plugins/MULTI.md
#	docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
#	docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
#	docs/install_guides/installing-openvino-macos.md
#	docs/install_guides/installing-openvino-windows.md
#	docs/ops/opset.md
#	inference-engine/samples/benchmark_app/README.md
#	inference-engine/tools/benchmark_tool/README.md
#	thirdparty/ade

* Cherry-picked: doc script changes (#8568)

* fix openvino-sphinx-theme

* add linkcheck target

* fix

* change version

* add doxygen-xfail.txt

* fix

* AA

* fix

* fix

* fix

* fix

* fix
# Conflicts:
#	thirdparty/ade

* Cherry-pick: Feature/azaytsev/doc updates gna 2021 4 2 (#8567)

* Various doc changes

* Reformatted C++/Python sections. Updated with info from PR8490

* additional fix

* Gemini Lake replaced with Elkhart Lake

* Fixed links in IGs, Added 12th Gen
# Conflicts:
#	docs/IE_DG/supported_plugins/GNA.md
#	thirdparty/ade

* Cherry-pick: Feature/azaytsev/doc fixes (#8897)

* Various doc changes

* Removed the empty Learning path topic

* Restored the Gemini Lake CPU list
# Conflicts:
#	docs/IE_DG/supported_plugins/GNA.md
#	thirdparty/ade

* Cherry-pick: sphinx copybutton doxyrest code blocks (#8992)

# Conflicts:
#	thirdparty/ade

* Cherry-pick: iframe video enable fullscreen (#9041)

# Conflicts:
#	thirdparty/ade

* Cherry-pick: fix untitled titles (#9213)

# Conflicts:
#	thirdparty/ade

* Cherry-pick: perf bench graph animation (#9045)

* animation

* fix
# Conflicts:
#	thirdparty/ade

* Cherry-pick: doc pytest (#8888)

* docs pytest

* fixes
# Conflicts:
#	docs/doxygen/doxygen-ignore.txt
#	docs/scripts/ie_docs.xml
#	thirdparty/ade

* Cherry-pick: restore deleted files (#9215)

* Added new operations to the doc structure (from removed ie_docs.xml)

* Additional fixes

* Update docs/IE_DG/InferenceEngine_QueryAPI.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/IE_DG/Int8Inference.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update Custom_Layers_Guide.md

* Changes according to review comments

* doc scripts fixes

* Update docs/IE_DG/Int8Inference.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update Int8Inference.md

* update xfail

* clang format

* updated xfail

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: kblaszczak-intel <karol.blaszczak@intel.com>
Co-authored-by: Yury Gorbachev <yury.gorbachev@intel.com>
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
Authored by Andrey Zaytsev on 2021-12-21 20:26:37 +03:00, committed by GitHub
parent 0c7089acc6
commit 4ae6258bed
670 changed files with 23447 additions and 15486 deletions


@@ -0,0 +1,31 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import argparse
import shutil
from pathlib import Path


def copy_images(input_dir: Path, output_dir: Path):
    """
    Copy images from doxygen xml folder to sphinx folder
    """
    output_dir.mkdir(parents=True, exist_ok=True)
    extensions = ('*.png', '*.jpg', '*.svg', '*.gif', '*.PNG', '*.JPG', '*.SVG', '*.GIF')
    for extension in extensions:
        for file in input_dir.glob(extension):
            shutil.copy(file, output_dir)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('input_dir', type=Path, help='Path to the folder containing images.')
    parser.add_argument('output_dir', type=Path, help='Path to the output folder')
    args = parser.parse_args()
    input_dir = args.input_dir
    output_dir = args.output_dir
    copy_images(input_dir, output_dir)


if __name__ == '__main__':
    main()
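
A minimal usage sketch for the image-copy helper above. The diff header does not show the file name, so assume the call runs alongside the functions shown; the paths are placeholders:

# Hypothetical invocation: gather every *.png/*.jpg/*.svg/*.gif produced by
# doxygen into the folder that sphinx serves as static content.
from pathlib import Path

copy_images(Path('build/docs/xml'), Path('build/docs/_images'))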


@@ -0,0 +1,85 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import argparse
import json
import logging
from lxml import etree
from pathlib import Path

REPOSITORIES = [
    'openvino',
    'omz'
]


def create_mapping(xml_input: Path, output_dir: Path, strip_path: Path):
    """
    Create a mapping between doxygen label and file path for edit on github button.
    """
    xml_input = xml_input.resolve()
    output_dir = output_dir.resolve()
    strip_path = strip_path.resolve()
    mapping = {
        'get_started': 'openvino/docs/get_started.md',
        'documentation': 'openvino/docs/documentation.md',
        'index': 'openvino/docs/index.rst',
        'model_zoo': 'openvino/docs/model_zoo.md',
        'resources': 'openvino/docs/resources.md',
        'tutorials': 'openvino/docs/tutorials.md',
        'tuning_utilities': 'openvino/docs/tuning_utilities.md'
    }
    output_dir.mkdir(parents=True, exist_ok=True)
    xml_files = xml_input.glob('*.xml')
    for xml_file in xml_files:
        try:
            root = etree.parse(xml_file.as_posix()).getroot()
            compounds = root.xpath('//compounddef')
            for compound in compounds:
                kind = compound.attrib['kind']
                if kind in ['file', 'dir']:
                    continue
                name_tag = compound.find('compoundname')
                name = name_tag.text
                name = name.replace('::', '_1_1')
                if kind == 'page':
                    exclude = True
                    for rep in REPOSITORIES:
                        if name.startswith(rep):
                            exclude = False
                    if exclude:
                        continue
                else:
                    name = kind + name
                location_tag = compound.find('location')
                file = Path(location_tag.attrib['file'])
                if not file.suffix:
                    continue
                try:
                    file = file.relative_to(strip_path)
                except ValueError:
                    logging.warning('{}: {} is not relative to {}.'.format(xml_file, file, strip_path))
                mapping[name] = file.as_posix()
        except AttributeError:
            logging.warning('{}: Cannot find the origin file.'.format(xml_file))
        except etree.XMLSyntaxError as e:
            logging.warning('{}: {}.'.format(xml_file, e))
    with open(output_dir.joinpath('mapping.json'), 'w') as f:
        json.dump(mapping, f)


def main():
    logging.basicConfig()
    parser = argparse.ArgumentParser()
    parser.add_argument('xml_input', type=Path, help='Path to the folder containing doxygen xml files')
    parser.add_argument('output_dir', type=Path, help='Path to the output folder')
    parser.add_argument('strip_path', type=Path, help='Strip from path')
    args = parser.parse_args()
    xml_input = args.xml_input
    output_dir = args.output_dir
    strip_path = args.strip_path
    create_mapping(xml_input, output_dir, strip_path)


if __name__ == '__main__':
    main()
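
The script writes a flat mapping.json keyed by doxygen labels and compound names. A short sketch of how a consumer might read it; the 'get_started' entry is one of the hardcoded defaults above, everything else about the consumer is assumed:

# Sketch: read the label-to-source-path mapping used for the
# "Edit on GitHub" button and look up one of the predefined entries.
import json

with open('mapping.json') as f:
    mapping = json.load(f)
print(mapping['get_started'])  # 'openvino/docs/get_started.md'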


@@ -0,0 +1,135 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import os
import re
import argparse
from pathlib import Path
import shutil
import logging


def get_label(file):
    """
    Read lines of a file and try to find a doxygen label.
    If the label is not found return None.
    Assume the label is in the first line
    """
    with open(file, 'r', encoding='utf-8') as f:
        line = f.readline()
        label = re.search(r'\{\#(.+)\}', line)
        if label:
            return label.group(1)


def replace_links(content, items, md_folder, labels, docs_folder):
    """
    Replace markdown links with doxygen labels.
    """
    for item in items:
        link = item
        link_path = md_folder.joinpath(link).resolve()
        if os.path.exists(link_path):
            content = content.replace(link, '@ref ' + labels[link_path])
        else:
            rel_path = os.path.relpath(link_path, docs_folder).replace('\\', '/')
            content = content.replace(link, rel_path)
    return content


def replace_image_links(content, images, input_dir, md_folder, output_dir):
    for image in images:
        new_path = md_folder.joinpath(image).resolve().relative_to(input_dir)
        new_path = output_dir / new_path
        content = content.replace(image, new_path.as_posix())
    return content


def add_htmlonly(content):
    content = content.replace('<details>', '\n\\htmlonly\n<details>')
    content = content.replace('</summary>', '</summary>\n\\endhtmlonly')
    content = content.replace('</details>', '\n\\htmlonly\n</details>\n\\endhtmlonly')
    content = content.replace('<iframe', '\n\\htmlonly\n<iframe')
    content = content.replace('</iframe>', '</iframe>\n\\endhtmlonly')
    return content


def copy_file(file, content, input_dir, output_dir):
    rel_path = file.relative_to(input_dir)
    dest = output_dir.joinpath(rel_path)
    dest.parents[0].mkdir(parents=True, exist_ok=True)
    with open(dest, 'w', encoding='utf-8') as f:
        f.write(content)


def copy_image(file, input_dir, output_dir):
    rel_path = file.relative_to(input_dir)
    dest = output_dir.joinpath(rel_path)
    dest.parents[0].mkdir(parents=True, exist_ok=True)
    try:
        shutil.copy(file, dest)
    except FileNotFoundError:
        logging.warning('{}: file not found'.format(file))


def get_refs_by_regex(content, regex):
    def map_func(path):
        return (Path(path[0]), path[1]) if isinstance(path, tuple) else Path(path)

    refs = set(map(map_func, re.findall(regex, content, flags=re.IGNORECASE)))
    return refs


def process(input_dir, output_dir, exclude_dirs):
    """
    Recursively find markdown files in docs_folder and
    replace links to markdown files with doxygen labels (ex. @ref label_name).
    """
    md_files = input_dir.glob('**/*.md')
    md_files = filter(lambda x: not any(ex_path in x.parents for ex_path in exclude_dirs), md_files)
    label_to_file_map = dict(filter(lambda x: x[1], map(lambda f: (Path(f), get_label(f)), md_files)))
    label_to_file_map.pop(None, None)
    for md_file in label_to_file_map.keys():
        md_folder = md_file.parents[0]
        with open(md_file, 'r', encoding='utf-8') as f:
            content = f.read()
        inline_links = set(re.findall(r'!?\[.*?\]\(([\w\/\-\.]+\.md)\)', content))
        reference_links = set(re.findall(r'\[.+\]\:\s*?([\w\/\-\.]+\.md)', content))
        inline_images = set(re.findall(r'!?\[.*?\]\(([\w\/\-\.]+\.(?:png|jpg|gif|svg))\)', content, flags=re.IGNORECASE))
        reference_images = set(re.findall(r'\[.+\]\:\s*?([\w\/\-\.]+\.(?:png|jpg|gif|svg))', content, flags=re.IGNORECASE))
        images = inline_images
        images.update(reference_images)
        content = replace_image_links(content, images, input_dir, md_folder, output_dir)
        md_links = inline_links
        md_links.update(reference_links)
        md_links = list(filter(lambda x: md_folder.joinpath(x) in label_to_file_map, md_links))
        content = replace_links(content, md_links, md_folder, label_to_file_map, input_dir)
        # content = add_htmlonly(content)
        copy_file(md_file, content, input_dir, output_dir)
        for image in images:
            path = md_file.parents[0].joinpath(image)
            copy_image(path, input_dir, output_dir)


def main():
    logging.basicConfig()
    parser = argparse.ArgumentParser()
    parser.add_argument('--input_dir', type=Path, help='Path to a folder containing .md files.')
    parser.add_argument('--output_dir', type=Path, help='Path to the output folder.')
    parser.add_argument('--exclude_dir', type=Path, action='append', default=[], help='Ignore a folder.')
    args = parser.parse_args()
    input_dir = args.input_dir
    output_dir = args.output_dir
    exclude_dirs = args.exclude_dir
    output_dir.mkdir(parents=True, exist_ok=True)
    process(input_dir, output_dir, exclude_dirs)


if __name__ == '__main__':
    main()
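
To illustrate the rewrite that get_label() and replace_links() perform, here is a standalone sketch with a made-up label and link (not taken from the repository); it reuses the same label regex as the script:

# A markdown file whose first line carries a doxygen label {#...} can be
# referenced by other pages; links to it are rewritten to '@ref <label>'.
import re

first_line = '# Installation Guide {#openvino_docs_example_label}'
label = re.search(r'\{\#(.+)\}', first_line).group(1)  # same regex as get_label()
content = 'See the [installation guide](install/guide.md) for details.'
content = content.replace('install/guide.md', '@ref ' + label)
print(content)  # See the [installation guide](@ref openvino_docs_example_label) for details.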

99  docs/scripts/log.py  Normal file

@@ -0,0 +1,99 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import argparse
import os
import re
from distutils.util import strtobool


def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument('--log', type=str, required=True, default=None, help='Path to doxygen log file')
    parser.add_argument('--ignore-list', type=str, required=False,
                        default=os.path.join(os.path.abspath(os.path.dirname(__file__)), 'doxygen-ignore.txt'),
                        help='Path to doxygen ignore list')
    parser.add_argument('--strip', type=str, required=False, default=os.path.abspath('../../'),
                        help='Strip from warning paths')
    parser.add_argument('--include_omz', type=strtobool, required=False, default=False,
                        help='Include link check for omz docs')
    parser.add_argument('--include_wb', type=strtobool, required=False, default=False,
                        help='Include link check for workbench docs')
    parser.add_argument('--include_pot', type=strtobool, required=False, default=False,
                        help='Include link check for pot docs')
    parser.add_argument('--include_gst', type=strtobool, required=False, default=False,
                        help='Include link check for gst docs')
    return parser.parse_args()


def strip_path(path, strip):
    """Strip the leading part of `path` up to and including `strip`
    """
    path = path.replace('\\', '/')
    if path.endswith('.md') or path.endswith('.tag'):
        strip = os.path.join(strip, 'build/docs').replace('\\', '/') + '/'
    else:
        strip = strip.replace('\\', '/') + '/'
    return path.split(strip)[-1]


def is_excluded_link(warning, exclude_links):
    if 'unable to resolve reference to' in warning:
        ref = re.findall(r"'(.*?)'", warning)
        if ref:
            ref = ref[0]
            for link in exclude_links:
                reg = re.compile(link)
                if re.match(reg, ref):
                    return True
    return False


def parse(log, ignore_list, strip, include_omz=False, include_wb=False, include_pot=False, include_gst=False):
    found_errors = []
    exclude_links = {'omz': r'.*?omz_.*?', 'wb': r'.*?workbench_.*?',
                     'pot': r'.*?pot_.*?', 'gst': r'.*?gst_.*?'}
    if include_omz:
        del exclude_links['omz']
    if include_wb:
        del exclude_links['wb']
    if include_pot:
        del exclude_links['pot']
    if include_gst:
        del exclude_links['gst']
    exclude_links = exclude_links.values()
    with open(ignore_list, 'r') as f:
        ignore_list = f.read().splitlines()
    with open(log, 'r') as f:
        log = f.read().splitlines()
    for line in log:
        if 'warning:' in line:
            path, warning = list(map(str.strip, line.split('warning:')))
            path, line_num = path[:-1].rsplit(':', 1)
            path = strip_path(path, strip)
            if path in ignore_list or is_excluded_link(warning, exclude_links):
                continue
            else:
                found_errors.append('{path} {warning} line: {line_num}'.format(path=path,
                                                                               warning=warning,
                                                                               line_num=line_num))
    if found_errors:
        print('\n'.join(found_errors))
        exit(1)


def main():
    args = parse_arguments()
    parse(args.log,
          args.ignore_list,
          args.strip,
          include_omz=args.include_omz,
          include_wb=args.include_wb,
          include_pot=args.include_pot,
          include_gst=args.include_gst)


if __name__ == '__main__':
    main()
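
A worked example of the exclusion logic above, with a fabricated warning string (assume it runs alongside log.py, so is_excluded_link is in scope):

# References into doc sets that are not built by default (omz_*, workbench_*,
# pot_*, gst_*) are ignored unless the matching --include_* flag is passed.
warning = "unable to resolve reference to 'omz_example_model_page'"
exclude_links = [r'.*?omz_.*?', r'.*?workbench_.*?', r'.*?pot_.*?', r'.*?gst_.*?']
print(is_excluded_link(warning, exclude_links))  # True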


@@ -0,0 +1,60 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import re
import logging
import argparse
from lxml import etree
from pathlib import Path
from xml.sax import saxutils


def prepare_xml(xml_dir: Path):
    """
    Preprocess doxygen XML files: escape the content of <sphinxdirective>
    blocks and insert explicit Sphinx targets for local doxygen anchors.
    """
    pattern = r'\<sphinxdirective\>(.+?)\<\/sphinxdirective>'
    xml_files = xml_dir.glob('*.xml')
    for xml_file in xml_files:
        try:
            with open(xml_file, 'r', encoding='utf-8') as f:
                contents = f.read()
            matches = re.findall(pattern, contents, flags=re.DOTALL)
            if matches:
                for match in matches:
                    contents = contents.replace(match, saxutils.escape(match))
            contents = str.encode(contents)
            root = etree.fromstring(contents)
            anchors = root.xpath('//anchor')
            localanchors = list(filter(lambda x: x.attrib['id'].startswith('_1'), anchors))
            for anc in localanchors:
                text = anc.attrib['id']
                heading = anc.getparent()
                para = heading.getparent()
                dd = para.getparent()
                index = dd.index(para)
                new_para = etree.Element('para')
                sphinxdirective = etree.Element('sphinxdirective')
                sphinxdirective.text = '\n\n.. _' + text[2:] + ':\n\n'
                new_para.append(sphinxdirective)
                dd.insert(index, new_para)
            with open(xml_file, 'wb') as f:
                f.write(etree.tostring(root))
        except UnicodeDecodeError as err:
            logging.warning('{}:{}'.format(xml_file, err))


def main():
    logging.basicConfig()
    parser = argparse.ArgumentParser()
    parser.add_argument('xml_dir', type=Path, help='Path to the folder containing xml files.')
    args = parser.parse_args()
    xml_dir = args.xml_dir
    prepare_xml(xml_dir)


if __name__ == '__main__':
    main()
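
For a local doxygen anchor, the function above injects an explicit Sphinx target; a tiny sketch of the generated directive text, using an illustrative anchor id:

# An <anchor id="_1some-section"> element leads to a new <sphinxdirective>
# holding the reST target below, so Sphinx can resolve links to it.
anchor_id = '_1some-section'
directive = '\n\n.. _' + anchor_id[2:] + ':\n\n'
print(repr(directive))  # '\n\n.. _some-section:\n\n'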


@@ -0,0 +1,25 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import os
import shutil


def remove_xml_dir(path):
    """
    Remove doxygen xml folder
    """
    if os.path.exists(path):
        shutil.rmtree(path, ignore_errors=True)


def main():
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('xml_dir')
    args = parser.parse_args()
    remove_xml_dir(args.xml_dir)


if __name__ == '__main__':
    main()


@@ -0,0 +1,125 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

""" Configuration for tests.

Tests for documentation utilize the pytest test framework for test execution
and report generation.

Documentation generation tests process the Doxygen log to generate a test per
documentation source file (.hpp, .md, etc. files). Source files
with errors can be skipped (--doxygen-skip) or temporarily excluded
(--doxygen-xfail).

Usage:
pytest --doxygen doxygen.log --html doc-generation.html test_doc-generation.py
"""

from inspect import getsourcefile
from contextlib import contextmanager
import os
import pytest
from distutils.util import strtobool

from utils.log import parse


def pytest_addoption(parser):
    """ Define extra options for pytest options
    """
    parser.addoption('--doxygen', help='Doxygen log path to run tests for')
    parser.addoption(
        '--doxygen-strip',
        default='tmp_docs/',
        help='Path to strip from paths found in doxygen log')
    parser.addoption(
        '--doxygen-xfail',
        action='append',
        default=[],
        help='A file with relative paths to files with known failures')
    parser.addoption(
        '--doxygen-skip',
        action='append',
        default=[],
        help='A file with relative paths to files to exclude from validation')
    parser.addoption(
        '--include_omz',
        type=str,
        required=False,
        default='',
        help='Include link check for omz docs')
    parser.addoption(
        '--include_wb',
        type=str,
        required=False,
        default='',
        help='Include link check for workbench docs')
    parser.addoption(
        '--include_pot',
        type=str,
        required=False,
        default='',
        help='Include link check for pot docs')
    parser.addoption(
        '--include_gst',
        type=str,
        required=False,
        default='',
        help='Include link check for gst docs')
    parser.addoption(
        '--include_ovms',
        type=str,
        required=False,
        default='',
        help='Include link check for ovms')


def read_lists(configs):
    """Read lines from files from configs. Return unique items.
    """
    files = set()
    for config_path in configs:
        try:
            with open(config_path, 'r') as config:
                files.update(map(str.strip, config.readlines()))
        except OSError:
            pass
    return list(files)


def pytest_generate_tests(metafunc):
    """ Generate tests depending on command line options
    """
    # read log
    with open(metafunc.config.getoption('doxygen'), 'r') as log:
        all_files = parse(log.read(), metafunc.config.getoption('doxygen_strip'))

    exclude_links = {'open_model_zoo', 'workbench', 'pot', 'gst', 'omz', 'ovms'}
    if metafunc.config.getoption('include_omz'):
        exclude_links.remove('omz')
    if metafunc.config.getoption('include_wb'):
        exclude_links.remove('workbench')
    if metafunc.config.getoption('include_pot'):
        exclude_links.remove('pot')
    if metafunc.config.getoption('include_gst'):
        exclude_links.remove('gst')
    if metafunc.config.getoption('include_ovms'):
        exclude_links.remove('ovms')

    filtered_keys = filter(lambda line: not any([line.startswith(repo) for repo in exclude_links]), all_files)
    files = {key: all_files[key] for key in filtered_keys}

    # read mute lists
    marks = dict()
    marks.update(
        (name, pytest.mark.xfail)
        for name in read_lists(metafunc.config.getoption('doxygen_xfail')))
    marks.update(
        (name, pytest.mark.skip)
        for name in read_lists(metafunc.config.getoption('doxygen_skip')))

    # generate tests
    if 'doxygen_errors' in metafunc.fixturenames:
        metafunc.parametrize(
            'doxygen_errors', [
                pytest.param(errors, marks=marks[file])
                if file in marks else errors for file, errors in files.items()
            ],
            ids=list(files.keys()))
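
A small sketch of how the mute lists turn into pytest marks in pytest_generate_tests() above; the file path and warning text are assumed, not real data:

# Files listed in a --doxygen-xfail file are parametrized with
# pytest.mark.xfail, so known documentation issues do not fail the run.
import pytest

errors_per_file = {'docs/example_page.md': {'some doxygen warning'}}
marks = {'docs/example_page.md': pytest.mark.xfail}
params = [pytest.param(errors, marks=marks[file]) if file in marks else errors
          for file, errors in errors_per_file.items()]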


@@ -0,0 +1,14 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

""" Test for Doxygen based documentation generation.
Refer to conftest.py on the test usage.
"""


def test_documentation_page(doxygen_errors):
    """ Test documentation page has no errors generating
    """
    if doxygen_errors:
        assert False, '\n'.join(['documentation has issues:'] +
                                sorted(doxygen_errors))


@@ -0,0 +1,2 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0


@@ -0,0 +1,55 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

"""
DoxygenLayout.xml parsing routines
"""

from collections import defaultdict
import argparse
from lxml import etree
import re


def parse_arguments():
    """
    Parse command line arguments
    """
    parser = argparse.ArgumentParser()
    parser.add_argument('--layout', type=str, required=True, default=None, help='Path to DoxygenLayout.xml file')
    return parser.parse_args()


def format_input(root):
    """
    Strip XML namespaces from element tags
    """
    for elem in root.getiterator():
        if not hasattr(elem.tag, 'find'):
            continue
        elem.tag = re.sub(r'{.+}(.+)', r'\1', elem.tag)


def parse_layout(content):
    """
    Parse DoxygenLayout.xml and report .md links that were not converted to doxygen references
    """
    parser = etree.XMLParser(encoding='utf-8')
    root = etree.fromstring(content, parser=parser)
    format_input(root)
    files = defaultdict(lambda: set())
    md_links = filter(
        lambda x: 'url' in x.attrib and x.attrib['url'].startswith('./') and x.attrib['url'].endswith('.md'),
        root.xpath('//tab'))
    for md_link in map(lambda x: x.attrib['url'], md_links):
        link = md_link[2:] if md_link.startswith('./') else md_link
        files[link] = set()
        files[link].update(["The link to this file located in DoxygenLayout.xml is not converted to a doxygen reference ('@ref filename')"])
    return files


if __name__ == '__main__':
    arguments = parse_arguments()
    with open(arguments.layout, 'r', encoding="utf-8") as f:
        content = f.read()
    parse_layout(content)
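
A minimal check of parse_layout() on a made-up layout snippet, assuming it runs in the same module as the code above:

# A <tab> whose url still points at a raw .md file instead of a
# '@ref filename' reference is reported by parse_layout().
layout = (b'<doxygenlayout version="1.0"><navindex>'
          b'<tab type="user" url="./documentation.md" title="Docs"/>'
          b'</navindex></doxygenlayout>')
flagged = parse_layout(layout)
print(list(flagged))  # ['documentation.md']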


@@ -0,0 +1,110 @@
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

""" Doxygen log parsing routines
"""

from collections import defaultdict
import argparse
import re


def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument('--doxygen', type=str, required=True, default=None, help='Path to doxygen.log file')
    parser.add_argument('--doxygen-strip', type=str, required=False, default='tmp_docs/',
                        help='Path to strip from paths found in the doxygen log')
    return parser.parse_args()


def strip_timestmp(text):
    """Strip jenkins timestamp
    """
    return text.split(']')[-1]


def strip_path(path, strip):
    """Strip the leading part of `path` up to and including `strip`
    """
    strip = strip.replace('\\', '/')
    if not strip.endswith('/'):
        strip = strip + '/'
    new_path = path.split(strip)[-1]
    if new_path.startswith('build/docs/'):
        new_path = new_path.split('build/docs/')[-1]
    return new_path


def _get_file_line(text):
    """Extracts file and line from Doxygen warning line
    """
    if text:
        location = text.split()[-1]
        file_line = location.rsplit(':', 1)
        if len(file_line) == 2:
            return file_line
    return '', ''


def parse(log, strip):
    """Extracts {file: errors} from doxygen log
    """
    log = log.splitlines()
    files = defaultdict(lambda: set())  # pylint: disable=unnecessary-lambda
    idx = 0
    prev_file = ''
    prev_line = ''
    while idx < len(log):  # pylint: disable=too-many-nested-blocks
        try:
            log_line = strip_timestmp(log[idx]).strip()
            processing_verb = next(
                filter(log_line.startswith,
                       ('Reading /', 'Parsing file /', 'Preprocessing /')),
                None)
            if processing_verb:
                files[strip_path(log_line[len(processing_verb) - 1:-3],
                                 strip)] = set()
            elif 'warning:' in log_line:
                warning = list(map(str.strip, log_line.split(': warning:')))
                file, line = _get_file_line(warning[0])
                file = strip_path(file, strip)
                if len(warning) == 1:
                    file = prev_file
                    line = prev_line
                    error = warning[0]
                else:
                    error = warning[1]
                    if error.endswith(':'):
                        continuation = []
                        while idx + 1 < len(log):
                            peek = strip_timestmp(log[idx + 1])
                            if not peek.startswith(' '):
                                break
                            continuation += [peek]
                            idx += 1
                        error += ';'.join(continuation)
                if line:
                    error = '{error} (line: {line})'.format(
                        line=line, error=error)
                if not file or 'deprecated' in file:
                    files['doxygen_errors'].update([error])
                else:
                    prev_file = file
                    prev_line = line
                    files[file].update([error])
            elif log_line.startswith('explicit link request') and 'in layout file' in log_line:
                match = re.search(r"\'(.+?)\'", log_line)
                if match:
                    file = match.group(1)
                    files[file].update([log_line])
                else:
                    files['doxygen_errors'].update([log_line])
            idx += 1
        except:
            print('Parsing error at line {}\n\n{}\n'.format(idx, log[idx]))
            raise
    return files


if __name__ == '__main__':
    arguments = parse_arguments()
    with open(arguments.doxygen, 'r') as log:
        files = parse(log.read(), arguments.doxygen_strip)
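
A tiny worked example of parse() above with fabricated log lines, showing the {file: errors} result keyed by stripped paths (assume it runs in the same module):

# One 'Reading ...' progress line registers the file, one warning line
# attaches an error to it; the leading '/repo/' prefix is stripped.
sample_log = ('Reading /repo/docs/example.md...\n'
              "/repo/docs/example.md:12: warning: unable to resolve reference to 'foo'\n")
result = parse(sample_log, '/repo/')
print(result['docs/example.md'])  # {"unable to resolve reference to 'foo' (line: 12)"}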