* Mk/ov pybind poc (#48)
* Move _pyngraph module to ngraph.pyngraph
* Stub for IECore and IENetwork classes
* Additional API classes
* passing test, extended functions and added blob
* worksave
* [POC Python API] Add IECore methods
* [POC Python API] Add iecore tests
* ienetwork
* ienet, input_info, dataptr
* irequest and execnet
* adapted benchmark
* Added support for InputInfo bindings
* Add Blob support for different types
* fix typo
* Fixed InputInfo maps
* Add keys() to inputs maps
* add uint8 blob
* return read_network as it should be
* fix blob buffer
* remove const input_info files and fix codestyle
* add mode parameter in benchmark app
* return _pyngraph
* delete benchmark copy
* return pyngraph as in master
* fix benchmark working
* add comment with api which need to implement
* remove unnecessary code from benchmark
* remove hardcoded path from setup.py
* Rename vars in setup.py
* working wheel
* fix wheel building
* Revert "working wheel"
This reverts commit 11d03a1833.
* fix tests
* Added async infer
* pass by ref
* add ccompiler to requirements
* fix blob creation and view
* replace absent method with working code in benchmark
* fix building
* worksave
* worksave queue
* no-deadlock async infer
* add lock handle in waitAll
* fix building issues with includes
* update of setup and cmakelist
* fix setup.py way of building
* add new methods for ie_core
* add ienetwork methods: serialize and get_function
* add methods for exec net and infer request class
* remove ccompiler from requirements
* remove set from cmake
* Update Blob class with precisions
* Rewrite test_write_numpy_scalar_int64
* Generic Blob casting in infer queue
* implementation of preprocess_info
* update license
* add set_blob method
* worksave
* added template for setblob
* Added blob convert in infer request
* move blob casting to common namespace
* add_outputs method
* work with func from pyopenvino
* add user_id to callbacks
* remove hardcoded root dir
* refactor code and comments
* [Python API] use parametrize in blob tests
* move common functions to conftest file
* Add tests for blob
* Update test_blob and test_network
* add parametrize in blob tests
* blob refactoring
* Fix sync in InferQueue and add default callbacks
* patch for protobuf cmake
* blob refactoring
* rename convert_to_blob to cast_to_blob
* rename to cast_to_blob in infer queue
* add missing cast_to_blob
* remove boost
* change license
* undo in cmake
* fix building
* [IE PYTHON API POC] Add fixed InferQueue and modification in async part of benchmark
* Add read_network(model,blob)
* Add blob_from_file helper
* Add read from Path
* Add tests
* Add read_network from bytes
* Error throwing in Common IE functions
* Cleaning samples
* Changes in ConstInputInfoWrapper class
* Add StatusCode to callback function for InferRequest
* Add random image generation and model path getting
* Move example model to examples path
* Adapt sync and async examples to new helpers
* Return request info containing StatusCode and ID from InferQueue for top idle request.
* Update benchmark app to use new API with request info
* Update examples to use two different approaches to InferQueue
* reset new line
* Add is_ready() to InferQueue
* fix building
* remove benchmark
* temporary add separate flag for building poc
* refactoring
* Remove GIL acquire in default callback and latencies
* Adapt benchmark to Core()
* Codestyle
* fix building
* [Python API] Move ngraph python api to the new destination
* fix building tests
* fix code-style checks
* building in azure
* fix building wheels
* apply fixes
* new structure
* fix building
* Add support for InferRequest::Cancel in pyopenvino
* fixes
remove gil release
add async infer after cancel
* remove extra files
* remove examples and benchmark
* fix code style
* fix building
* fix tests
* merge inits from old and new api
* fix azure ci
* fix setup.py
* fix setup.py building
* try to fix mac
Co-authored-by: Michal Karzynski <michal.karzynski@intel.com>
Co-authored-by: jiwaszki <jan.iwaszkiewicz@intel.com>
Co-authored-by: anastasia.kuporosova <akuporos@akuporos.inn.intel.com>
Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
* Fix comments
* fix comments: cmakelist
* fix building on arm
* permission for merge script
This toolkit allows developers to deploy pre-trained deep learning models
through a high-level C++ Inference Engine API integrated with application logic.
This open source version includes several components: Model Optimizer, nGraph, and the
Inference Engine, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics.
It supports pre-trained models from the Open Model Zoo, along with 100+ open
source and public models in popular formats such as Caffe*, TensorFlow*,
MXNet*, and ONNX*.
The Deep Learning Deployment Toolkit is licensed under the Apache License, Version 2.0.
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.