* Add MulConvFusion transformation
This transformation is applied to the following graph:
```
+-------+        +----------+
| Input |        | Constant |
+-------+        +----------+
    |                 |
    ---------   -------
            |   |
            v   v
         +----------+        +---------+
         | Multiply |        | Weights |
         +----------+        +---------+
               |                  |
               ---------   -------
                       |   |
                       v   v
                +----------------+
                | Convolution Op |
                +----------------+
```
and converts it to:
```
+---------+      +----------+
| Weights |      | Constant |
+---------+      +----------+
     |                |
     -----------   ----
               |   |
               v   v
+-------+   +----------+
| Input |   | Multiply |
+-------+   +----------+
    |            |
    --------   ---
           |   |
           v   v
     +----------------+
     | Convolution Op |
     +----------------+
```
Since 'Weights' is a constant in most cases, the right-hand side gets constant-folded
and the Multiply node is eliminated, as sketched below.
Ticket: 52283
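For illustration, a minimal sketch of the fused form built with ngraph opset8 ops; the helper below is hypothetical (not part of the actual pass) and assumes a scalar multiplier constant, whereas the real transformation also reshapes per-channel constants so they broadcast over the weights.
```
#include <ngraph/ngraph.hpp>
#include <ngraph/opsets/opset8.hpp>

#include <memory>
#include <vector>

// Hypothetical illustration of the fused graph, not the actual MulConvFusion pass.
std::shared_ptr<ngraph::Function> build_fused_graph() {
    using namespace ngraph;
    auto input = std::make_shared<opset8::Parameter>(element::f32, Shape{1, 3, 224, 224});
    // A scalar multiplier is assumed here; the real pass also handles
    // per-channel constants by reshaping them to broadcast over the weights.
    auto mul_const = opset8::Constant::create(element::f32, Shape{}, {0.5f});
    auto weights = opset8::Constant::create(
        element::f32, Shape{16, 3, 3, 3}, std::vector<float>(16 * 3 * 3 * 3, 1.0f));
    // The Multiply is moved from the input branch onto the constant weights...
    auto scaled_weights = std::make_shared<opset8::Multiply>(weights, mul_const);
    // ...so constant folding collapses it into the weights and the runtime graph
    // keeps only the Convolution.
    auto conv = std::make_shared<opset8::Convolution>(
        input, scaled_weights, Strides{1, 1}, CoordinateDiff{0, 0},
        CoordinateDiff{0, 0}, Strides{1, 1});
    return std::make_shared<Function>(conv, ParameterVector{input});
}
```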
* Handle GroupConvolution, ConvolutionBackpropData, GroupConvolutionBackpropData in separate transformations
* Handle dequantization subgraph
* add namespace
* add more ngraph namespace
* address review comments
* fix build issue due to implicit-const-int-float-conversion and remove unused lambda function
* just remove it instead of commenting out
Co-authored-by: FuhengWu@Oracle <fuheng.wu@oracle.com>
* Draft
* More tests
* to_string + advanced_syntax + more tests
* Coding style
* Add mean/scale - vector version with layout support
Vector version requires layout to be set
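A minimal sketch of the vector mean/scale preprocessing this adds, assuming the ov::preprocess::PrePostProcessor interface as it later stabilized (the draft API at this revision may differ); the model path is a placeholder, and the per-channel vectors only become meaningful once the layout identifies the channels dimension.
```
#include <openvino/core/preprocess/pre_post_process.hpp>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder path

    ov::preprocess::PrePostProcessor ppp(model);
    // The layout must be set first: it tells the preprocessor which dimension
    // holds the channels, so the 3-element mean/scale vectors can be applied
    // per channel.
    ppp.input().tensor().set_layout("NCHW");
    ppp.input().preprocess()
        .mean({123.675f, 116.28f, 103.53f})
        .scale({58.395f, 57.12f, 57.375f});
    model = ppp.build();
    return 0;
}
```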
* Added comments to LayoutRank
* Removed unnecessary public API
- Removed setters
- Removed LayoutRank from public classes
* Review comments:
- Rename 'layouts' namespace to 'layout'
- 'get_index_by_name' - specify throw exception type
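A small sketch of the resulting Layout usage, assuming the public ov::Layout class and the free functions in the 'layout' namespace as they later shipped; exact helper names around this revision may differ.
```
#include <openvino/core/layout.hpp>

#include <iostream>

int main() {
    // Basic syntax; the "advanced" syntax mentioned above also allows '?' for an
    // unnamed dimension and '...' for a group of unspecified dimensions.
    ov::Layout layout("NCHW");
    std::cout << layout.to_string() << "\n";

    // Dimension lookup by name; these helpers throw if the dimension is absent,
    // which is the kind of lookup the 'get_index_by_name' comment above refers to.
    std::cout << "channels at " << ov::layout::channels_idx(layout) << "\n";  // 1 for NCHW
    std::cout << "height at " << ov::layout::height_idx(layout) << "\n";      // 2 for NCHW
    return 0;
}
```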
* rebasing the perf-modes-2021.3 to the 2021.4
Caveats:
the (explicit) setting of #streams is not disabled (as it was before for experiments with the DLBenchmark), and the logic slightly differs (streamsSet)
(cherry picked from commit 1ae1edc0ed)
* overriding streams (to force the TPUT mode for the DLBenchmark)
(cherry picked from commit 7f506cda31)
* disabling the reduction of #streams to fully mimic the baseline c4df94d42d of the 2021.3 (before experiments)
(cherry picked from commit 85073dd1dd)
* clang/indentation
(cherry picked from commit 050a4155a9)
* splitting the Transformation into general and CPU-specific parts.
Now, hopefully, this fully mimics the baseline c4df94d42d of the 2021.3 (before experiments), as the reduction of the number of streams (as well as the early exit on GRU/LSTM/TensorIterator) is disabled
(cherry picked from commit e98b2c1a67)
* disabling GRU/LSTM/TI + reducing of streams + 5D considered compute-limited only for int8
(cherry picked from commit 32b8d80dee)
* refactored to avoid compute_limited_ratio, reverted the reduction of #streams, removed LSTM from limitations
(cherry picked from commit f2b972171b)
* ISA-based threshold logic
(cherry picked from commit b218457e1a)
* mode->hint
(cherry picked from commit ec20aa8eca)
* optional PERFORMANCE_HINT_NUM_REQUESTS
(cherry picked from commit 5a3883e3f3)
* moving the perfHints to the common OV config class + initial tests (CPU only, as the actual AUTO/MULTI should be accommodated on the master)
(cherry picked from commit (then fixed) 45bafe7d527f466507dea0693aeed51be4ebf776)
* AUTO support for PerfHints
* MULTI support for PerfHints
* Enabling Perf hints for the GPU plugin
* brushing settings output a bit
* disabling "throughput" perf hint being default (until OV 2.0)
* uncommenting the logic which was disabled to force the DLBenchmark to use the throughput mode by default
* removing dead and experimental code, and debug printfs
* clang/code-style
* code-review remarks
* Moved the output of the actual params that the hint produced to the right place
* aligning MULTI's GetConfig behavior to HETERO's, as captured in the presentation (CVS-59960) ratified with the ArchForum
* clang
* benchmark_app brushing
* Update inference-engine/samples/benchmark_app/README.md
* propagating the perf hints through one more scenario in the merged AUTO-MULTI
* fixed misprint
* Python benchmark_app update for perf hints
* addressing reviewers' comments on the Python benchmark_app
* simplifying/brushing logic a bit
* refactor the heuristic into a separate file (to be shared with iGPU soon)
* refactor conversion of modes to the specific GPU config per feedback from Vladimir
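A minimal sketch of how an application opts into the performance hints described above, using plain string keys (the PERFORMANCE_HINT and PERFORMANCE_HINT_NUM_REQUESTS names mentioned in these commits) with the classic InferenceEngine::Core API; the model path is a placeholder.
```
#include <inference_engine.hpp>

#include <map>
#include <string>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");  // placeholder path

    // The hint tells the plugin what to optimize for; the plugin (CPU, GPU, or
    // AUTO/MULTI forwarding to them) then derives device-specific settings such
    // as the number of streams from it.
    std::map<std::string, std::string> config = {
        {"PERFORMANCE_HINT", "THROUGHPUT"},      // or "LATENCY"
        {"PERFORMANCE_HINT_NUM_REQUESTS", "4"},  // the optional request-count hint
    };
    auto exec_network = core.LoadNetwork(network, "CPU", config);
    (void)exec_network;
    return 0;
}
```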
* [47750] Validate conditional compilation with models from OMZ
* [47750] Remove model
* [47750] Use generator expression
* [47750] Use f-strings
* [47750] Use resolve() instead of abs_path()
* [47750] Use cmd_exec() instead of subprocess.check_output()
* [47750] Use download_models fixture in test_cc_collect, test_verify
* [47750] Update prepare_models
* [47750] Update test_infer
* [47750] Add models
* [47750] Use custom logger
* [47750] Refactor prepare_models usage
* [47750] Rename model_struct to model
* [47750] Update help description
* [47750] Add function description for prepare_models, prepare_omz_model
* [47750] Move OMZ_NUM_ATTEMPTS to global scope
* [47750] Rename models to model
* [47750] Add "type" property in model
* [47750] Add default path for cache
* [47750] Remove conversion to str
* [47750] Rename prepare_models to prepared_models
* [47750] Remove redundant expand_env_vars call
* [47750] Use lower case "omz" in test_config; do not use default value
* [47750] Use only prepared_models in tests, without models in arguments
* [47750] Remove "framework" property in test_config
* [47750] Use omz_path and omz_cache_dir fixtures
* [47750] Make omz_cache_dir optional
* [47750] Remove validate_path_arg for omz_cache_dir
* [47750] Add validate_path_arg and log.warning for omz_cache_dir
* [47750] Add default value for omz_repo
* [47750] Use OMZ_MODELS_PATH environment variable
* [47750] Use tmpdir instead of OMZ_MODELS_PATH; use precision in test_id_list
* [47750] Update README.md
* [47750] Remove model_path variable
* [47750] Remove try/except for omz_path
* [47750] Rename omz_path to omz_repo
* [Frontend][Paddle] Handle Exception in Op Conversion.
* [Frontend][Paddle] revise comments
* [Frontend][Paddle] add tests for error handling
* [Frontend] fix typo
* [Frontend][Paddle] relax model version check to 2.0.0
* [Frontend][Paddle] fix typo
* updated FasterRCNN and SSD analysis patterns
* updated tf od api conditions
* updated ssd patterns
* added more ssd topologies
* move preprocessor to tf od api condition
* update TF OD API conditions
* refactoring
* specify data type
* Add visitor api test
* Review ngraph op shell with type_prop tests
* Add op to list of trusted operations
* Change name of struct with information of inputs
* Add include of array data structure to fix Windows compilation error
* Add template plugin test class
* Remove usage of CoordinateTransform index function call from reference implementation
* Rename SLT test suite
* Add template plugin unit test
* Add serialization SLTs
* Add indentation on GatherTreeParams class data members
If port > port_num, the behavior of res[port] is undefined.
Signed-off-by: fengyi.sun <fengyi.sun@intel.com>
Reviewed-by: Wu, Jiangming <jiangming.wu@intel.com>
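For context, a hypothetical sketch of the guarded access implied by the note above; 'res' and 'port' follow the commit message, while the container type and the exact check are assumptions, not the actual OpenVINO code.
```
#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical helper: indexing res[port] without a range check is undefined
// behavior once 'port' exceeds the number of ports, so validate it first.
float value_at_port(const std::vector<float>& res, std::size_t port) {
    if (port >= res.size()) {
        throw std::out_of_range("port index exceeds the number of output ports");
    }
    return res[port];
}
```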
* [Python API] Move ngraph python api to the new destination
* fix building tests
* fix code-style checks
* building in azure
* fix building wheels
* apply fixes