* Separated the SavedModelVariablesIndex class from SavedModel
* Renamed SavedModelVariablesIndex class
* Enabled TensorFlow MetaGraph
* Covered VariableV2 and Assign nodes
* Applied review comments
* Added tests
* Added names to input/output ports too
* Fixed naming for use with MO
* Applied part of review comments
* Renamed meta.cpp and saved_model.cpp
* Applied shared_ptr for memory management of PtrNode
* Fixing CI
* Prevent cycles while traversing the graph (see the sketch after this commit group)
* Relaxed the requirement for a Checkpointable Object Graph
* Changed naming approach to align port order
* Changed renaming order (before reordering)
* Added a Placeholder translator which checks updated shape
* Workaround for a missing Identity name
* Fixed CI and restored translators lost after rebase
* Workaround for output names
* Removing unused params after cutting a model
* Prevent a crash when VariableV2 appears in a frozen model
* Fixed SavedModel handling when variables.index is not found but variables exist
* Changed the approach to handling native format support
* Aligned behavior with freezing .meta files
* Fixed behavior for cutting a model by input tensor
* Applied review comments
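For context on the cycle-prevention and shared_ptr commits above, a minimal sketch (standard C++ only, with a hypothetical Node type, not the frontend's actual classes) of traversing a graph while tracking visited nodes so a back edge never re-enters the recursion:

```cpp
#include <memory>
#include <unordered_set>
#include <vector>

// Hypothetical node type used only for this illustration.
struct Node {
    std::vector<std::shared_ptr<Node>> inputs;
};

// Walk the graph from 'start', visiting each node at most once.
// The 'visited' set is what prevents infinite loops on cyclic graphs.
void traverse(const std::shared_ptr<Node>& start,
              std::unordered_set<const Node*>& visited) {
    if (!start || !visited.insert(start.get()).second)
        return;  // already seen: stop here instead of cycling forever
    for (const auto& input : start->inputs)
        traverse(input, visited);
}
```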
* fix: embedded export is available for embedded targets only
* [GNA] functional tests fix: embedded export should NOT be possible on a non-embedded target
* [GNA] tests added/justified to cover both the negative and positive paths
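A loose sketch of the positive/negative test pattern described above, using GoogleTest; the predicate and target names are placeholders, not the GNA plugin's real API:

```cpp
#include <gtest/gtest.h>
#include <string>

// Hypothetical stand-in for the plugin's "is embedded export allowed" decision;
// the real check lives inside the GNA plugin and is not reproduced here.
static bool embedded_export_allowed(const std::string& target) {
    return target.find("embedded") != std::string::npos;
}

// Positive path: export is allowed on an embedded target.
TEST(EmbeddedExport, AllowedOnEmbeddedTarget) {
    EXPECT_TRUE(embedded_export_allowed("embedded_target"));
}

// Negative path: export must be rejected on a non-embedded target.
TEST(EmbeddedExport, RejectedOnNonEmbeddedTarget) {
    EXPECT_FALSE(embedded_export_allowed("generic_target"));
}
```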
* [GPU] Fix i8 representation error for clamp due to overflow
Signed-off-by: Andrew Park <andrew.park@intel.com>
* Fix to not include in ocl code
Signed-off-by: Andrew Park <andrew.park@intel.com>
---------
Signed-off-by: Andrew Park <andrew.park@intel.com>
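An illustrative sketch of the overflow hazard behind the clamp fix (not the GPU kernel code): clamp in a wider integer type and saturate before casting, since a plain cast of an out-of-range value to int8_t wraps around:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>

// Saturating clamp to the int8 range, done in a wider type first.
int8_t clamp_to_i8(int32_t value, int32_t lo, int32_t hi) {
    const int32_t clamped   = std::clamp(value, lo, hi);
    const int32_t saturated = std::clamp(clamped,
                                         int32_t{INT8_MIN},
                                         int32_t{INT8_MAX});
    return static_cast<int8_t>(saturated);
}

int main() {
    // A naive cast of an out-of-range value wraps (typically 300 -> 44).
    std::cout << int(static_cast<int8_t>(300)) << '\n';
    // The saturating path yields 127 instead.
    std::cout << int(clamp_to_i8(300, -128, 400)) << '\n';
}
```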
* fix input issue of the ScatterNDUpdate conformance test
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* fix typo and optimize temporary variable
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
---------
Signed-off-by: Hu Yuan2 <yuan2.hu@intel.com>
* add _streams_info_table to the Executor config
* change useHyperThreading init value
* restore cmake
* fix comments
* add call to the enableCpuPinning property (see the usage sketch after this commit group)
* fix the check for the number of sockets in init_stream
* fix test case compile issue
* fix CI test case failure
* modify where GetPerformanceStreams is called
* add affinity in get_cpu_pinning
* modify the E-core judgment
* disable core binding on ADL
* fix CI issue, add get_num_numa_nodes()
* fix code style
* fix StreamsHasHigherPriority issue
* fix according to comments
* fix performance regression
* fix code style
* code style
* fix warning
* fix CI test failure
* fix ImportNetwork issue
* fix CI test case issue
* fix smoke_CachingSupportCase_CPU issue
* add ExportOptimalNumStreamsTest test
* modify test name
* modify ExportOptimalNumStreams test
---------
Co-authored-by: Chen Peter <peter.chen@intel.com>
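For the streams/hyper-threading/pinning commits above, a hedged usage sketch using what I understand to be the public OpenVINO 2.0 properties (the internal _streams_info_table itself is not exposed; the model path is a placeholder):

```cpp
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder path

    // Public hints that the CPU plugin maps onto its internal stream layout:
    // stream count, hyper-threading usage, and CPU pinning.
    auto compiled = core.compile_model(model, "CPU",
                                       ov::num_streams(4),
                                       ov::hint::enable_hyper_threading(false),
                                       ov::hint::enable_cpu_pinning(true));

    // The plugin reports how many infer requests best saturate those streams.
    std::cout << compiled.get_property(ov::optimal_number_of_infer_requests)
              << std::endl;
    return 0;
}
```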
* Update MULTI doc per current implementation
Signed-off-by: Peter Chen <peter.chen@intel.com>
* Update the description of Multi-Device execution mode
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Remove sample code and video
1. Remove the sample code for removed behaviors
2. Remove the video to avoid confusion
Signed-off-by: Peter Chen <peter.chen@intel.com>
---------
Signed-off-by: Peter Chen <peter.chen@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Intermediate state
* Remove old dyn batch path in the new api
* Remove legacy dyn batch support
* Remove dyn batch support field from the config
* Revert changes to the common part
* Revert accidental change in the test file
* Minor fixes
* Fix dyn batch support when the current batch size is not set
* Typo fix
* TypeRelaxed<>::clone_with_new_inputs thread safety fix
* Style
* Make TypeRelaxed<BaseOp>::clone_with_new_inputs copy the node the same way as the copy ctor of ov::Node (see the sketch below)
* Removed mutex field from intel_cpu::GraphContext
* Removed everything related to the has_type_relaxed_ops field from the snippets subgraph
* Cloning test
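A generic sketch (plain C++ with a hypothetical Node class, not the actual TypeRelaxed code) of the cloning approach the thread-safety fix describes: build the clone from the copy constructor so it owns its own state instead of mutating or sharing the original's:

```cpp
#include <memory>
#include <utility>
#include <vector>

// Hypothetical node type; not the real ov::Node / TypeRelaxed classes.
class Node {
public:
    Node() = default;
    Node(const Node& other) = default;  // every copy gets its own cached state

    // Clone built on top of the copy constructor: the clone owns a private
    // copy of the cached types, so concurrent clones of the same node do not
    // race on shared mutable state (the failure mode the fix addresses).
    std::shared_ptr<Node> clone_with_new_inputs(std::vector<int> inputs) const {
        auto copy = std::make_shared<Node>(*this);
        copy->m_inputs = std::move(inputs);
        return copy;
    }

private:
    std::vector<int> m_inputs;
    std::vector<int> m_cached_types;  // per-instance, never shared between copies
};
```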
* update auto architecture doc
* Apply suggestions from code review
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* update for comments
---------
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>