[IE][VPU]: Configuration options in VPU plugins refactoring (#3211)

* [IE]: Enables Abstract class -> Parameter conversion support

Parameter has templated constructor allowing to write code

```
Parameter p = i; // i of type int for example
```

This constructor uses SFINAE to resolve ambiguity with the move
constructor, so it checks that the argument is not of the same type.
If it is not, it calls a std::tuple constructor that constructs an
instance of the argument type. Consider the following case:

```
Parameter p = static_cast<Parameter>(abstractRef);
// abstractRef is a reference to abstract class
```

Here we have a reference to an abstract class that defines an explicit
cast operator to Parameter. Contrary to expectations, the Parameter
constructor is instantiated instead of the cast operator, since
template type deduction for the constructor does not fail (the
abstract class is not the same type as Parameter). Instantiation
of the tuple constructor used inside then fails: it is impossible to
create an instance of an abstract class, which leads to a compile-time
error. To resolve the issue, an additional condition is introduced to
check whether the argument type is abstract.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE]: Enables PrintTo method for Parameter and tests on it

The Inference Engine API for configuration options uses the Parameter
type as the return type of the GetConfig method. Parameter is intended
to store the object associated with a configuration option.
To support objects of different types, its constructor is templated.
Parameter also overloads cast operators, which are templated as well.
Both the constructor and the cast operators are implicit, which
makes it possible to implicitly convert any type to Parameter
and vice versa.

Since Parameter is a part of the Inference Engine configuration API,
it is essential that Google Tests on the API take Parameter as a test
parameter. For each test parameter, the Google Test framework tries to
print it to an output stream. For that purpose, Google Test checks
whether the test parameter has an output stream operator or a PrintTo
method. If not, it checks whether the parameter can be implicitly
converted to an integral type and, in that case, prints it as a long
integer.

InferenceEngine::Parameter does not define an output stream operator,
but it can be implicitly converted to an integer via the cast
operators mentioned above, so Google Test tries that conversion.
Since Parameter does not necessarily contain an integer, the
conversion throws a type-mismatch exception, which makes it
impossible to use Parameter in the Google Test framework as is.

In order to resolve that issue, Parameter should define either an
output stream operator or a PrintTo method. If Parameter defined an
output stream operator, it would make it possible to stream almost
any object to an output stream: when an object itself has no output
stream operator, C++ checks whether it can be implicitly converted to
another type that defines one, and any type is implicitly convertible
to Parameter.

Taking this into consideration, the only way to support Parameter in
Google Test without breaking backward compatibility is to define a
PrintTo method.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE]: Fixes ill-formed extending std names

According to the standard:

The behavior of a C++ program is undefined if
it adds declarations or definitions to namespace
std or to a namespace within namespace std unless
otherwise specified. A program may add a template
specialization for any standard library template
to namespace std only if the declaration depends
on a user-defined type and the specialization meets
the standard library requirements for the original
template and is not explicitly prohibited.

As an unexpected result, an InferenceEngine::Parameter that contains
a std::vector<std::string> can be printed via PrintTo. In that case
the operator<< version from Inference Engine is picked up.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Moves CompilationConfig out of GT header

Keeping the config in a separate header simplifies migration
to the new interface.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Removes Platform enum

Since MVNC provides an enum for the same purpose,
there is no need for Platform anymore.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Introduces containers utility header

Contains some helpers to work with C++ maps

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Introduces new configuration API

The main ideas are to separate option-specific logic from the
common container, to automate the handling of public vs private,
deprecated, and compile-time vs run-time options, and to remove
code duplication.

Since IE defines the configuration API using std::string
and Parameter, options have to provide a way to be
represented as Parameter (e.g. when GetConfig is called)
and to be defined using std::string (e.g. when SetConfig is
called). Keeping information about the actual key value
is useful for error reporting.

The new API falls back to the previous version when
unsupported options are requested. This way migration
becomes iterative and simpler.

Options containers are related to the corresponding components:
* CompilationConfig (name to be changed) - GraphTransformer
* PluginConfiguration - base class for plugin configurations
* MyriadConfiguration - Myriad plugin configuration
* HDDLConfiguration - HDDL plugin configuration (to be
  introduced in a separate request)

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Replaces CompilationConfig with PluginConfiguration

Some of the options to be refactored are stored inside
CompilationConfig, which is passed to the graph transformer (the
compiler) to be processed. Since it is a separate data structure and
the migration process is iterative, we need a mechanism to provide
some compilation options through the new interface and some through
the old one. It cannot be done via the plugin-specific class
(MyriadConfiguration), since other plugins use the graph transformer
as well. The plugin-specific class (MyriadConfiguration) already
inherits from the old version (MyriadConfig), which in turn inherits
from ParsedConfig containing CompilationConfig.

To resolve the issue, MyriadConfig's inheritance from ParsedConfig is
made virtual, so that PluginConfiguration can virtually inherit from
ParsedConfig as well, making PluginConfiguration the data structure
for configuration options for the graph transformer. Since
PluginConfiguration is a base class of MyriadConfiguration (as is
MyriadConfig) and the inheritance is virtual, the plugin just casts
its specific configuration to the base one when passing it to the
graph transformer.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Enables new tests on configuration API

* Enables the following new shared tests on configuration API:
  * Can load network with empty configuration
  * Check default value for configuration option
  * Can load network with correct configuration
  * Check custom value for configuration option (set and compare)
  * Check public configuration options are visible through API
  * Check private configuration options are invisible through API
  * Check GetConfig throws an exception on incorrect key
* Refactors myriad plugin instantiations for shared tests

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Extracts LogLevel enum to a separate header

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Refactors LOG_LEVEL configuration option

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Refactors COPY_OPTIMIZATION configuration option

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Fixes behavior tests build

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Updates tests on new exception class

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Removes unused variable from mvnc test

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>

* [IE][VPU]: Removes SizeVector streaming call

The new IE_ASSERT assertion macro implementation uses the
output streaming operator with an r-value reference as the
stream argument. This prevents the compiler from picking up
the overload from InferenceEngine::details, since our version
takes the stream by non-const l-value reference.

Since there is no simple way to also provide output streaming
operator overloads for r-value references, and this call is
just a message for an assert in test utilities, it was decided
to simply remove the call for now.

Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
This commit is contained in:
Gladilov, Gleb 2021-06-17 18:54:39 +03:00 committed by GitHub
parent 3063afdbfc
commit e61a594199
GPG Key ID: 4AEE18F83AFDEB23
92 changed files with 1767 additions and 590 deletions

View File

@ -86,7 +86,8 @@ public:
* @param parameter object
*/
template <class T,
typename = typename std::enable_if<!std::is_same<typename std::decay<T>::type, Parameter>::value>::type>
typename = typename std::enable_if<!std::is_same<typename std::decay<T>::type, Parameter>::value &&
!std::is_abstract<typename std::decay<T>::type>::value>::type>
Parameter(T&& parameter) { // NOLINT
static_assert(!std::is_same<typename std::decay<T>::type, Parameter>::value, "To prevent recursion");
ptr = new RealData<typename std::decay<T>::type>(std::forward<T>(parameter));
@ -254,6 +255,21 @@ public:
return !(*this == rhs);
}
/**
* @brief Prints underlying object to the given output stream.
* Uses operator<< if it is defined, leaves stream unchanged otherwise.
* In case of empty parameter or nullptr stream immediately returns.
*
* @param object Object to be printed to the given output stream.
* @param stream Output stream object will be printed to.
*/
friend void PrintTo(const Parameter& object, std::ostream* stream) {
if (object.empty() || !stream) {
return;
}
object.ptr->print(*stream);
}
private:
template <class T, class EqualTo>
struct CheckOperatorEqual {
@ -273,6 +289,24 @@ private:
template <class T, class EqualTo = T>
struct HasOperatorEqual : CheckOperatorEqual<T, EqualTo>::type {};
template <class T, class U>
struct CheckOutputStreamOperator {
template <class V, class W>
static auto test(W*) -> decltype(std::declval<V&>() << std::declval<W>(), std::true_type()) {
return {};
}
template <typename, typename>
static auto test(...) -> std::false_type {
return {};
}
using type = typename std::is_same<std::true_type, decltype(test<T, U>(nullptr))>::type;
};
template <class T>
struct HasOutputStreamOperator : CheckOutputStreamOperator<std::ostream, T>::type {};
struct Any {
#ifdef __ANDROID__
virtual ~Any();
@ -282,6 +316,7 @@ private:
virtual bool is(const std::type_info&) const = 0;
virtual Any* copy() const = 0;
virtual bool operator==(const Any& rhs) const = 0;
virtual void print(std::ostream&) const = 0;
};
template <class T>
@ -318,6 +353,20 @@ private:
bool operator==(const Any& rhs) const override {
return rhs.is(typeid(T)) && equal<T>(*this, rhs);
}
template <class U>
typename std::enable_if<!HasOutputStreamOperator<U>::value, void>::type
print(std::ostream& stream, const U& object) const {}
template <class U>
typename std::enable_if<HasOutputStreamOperator<U>::value, void>::type
print(std::ostream& stream, const U& object) const {
stream << object;
}
void print(std::ostream& stream) const override {
print<T>(stream, get());
}
};
template <typename T>

View File

@ -25,6 +25,9 @@
#include "ie_algorithm.hpp"
namespace InferenceEngine {
namespace details {
/**
* @brief Serializes a `std::vector` to a `std::ostream`
* @ingroup ie_dev_api_error_debug
@ -32,7 +35,6 @@
* @param vec A vector to serialize
* @return A reference to a `std::stream`
*/
namespace std {
template <typename T>
inline std::ostream& operator<<(std::ostream& out, const std::vector<T>& vec) {
if (vec.empty()) return std::operator<<(out, "[]");
@ -42,10 +44,7 @@ inline std::ostream& operator<<(std::ostream& out, const std::vector<T>& vec) {
}
return out << "]";
}
} // namespace std
namespace InferenceEngine {
namespace details {
/**
* @brief trim from start (in place)
* @ingroup ie_dev_api_error_debug

View File

@ -0,0 +1,99 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <string>
#include <vector>
#include <map>
#include "caseless.hpp"
#include "vpu/utils/optional.hpp"
namespace vpu {
struct CompilationConfig {
int numSHAVEs = -1;
int numCMXSlices = -1;
int numExecutors = -1;
int tilingCMXLimitKB = -1;
bool hwOptimization = true;
bool hwExtraSplit = false;
std::string irWithVpuScalesDir;
std::string customLayers;
bool detectBatch = true;
Optional<bool> injectSwOps;
Optional<bool> packDataInCmx;
bool mergeHwPoolToConv = true;
bool hwDilation = false;
bool forceDeprecatedCnnConversion = false;
bool enableEarlyEltwiseReLUFusion = true;
std::map<std::string, std::vector<int>> ioStrides;
//
// Debug options
//
InferenceEngine::details::caseless_set<std::string> hwWhiteList;
InferenceEngine::details::caseless_set<std::string> hwBlackList;
bool hwDisabled(const std::string& layerName) const {
if (!hwWhiteList.empty()) {
return hwWhiteList.count(layerName) == 0;
}
if (!hwBlackList.empty()) {
return hwBlackList.count(layerName) != 0;
}
return false;
}
InferenceEngine::details::caseless_set<std::string> noneLayers;
bool skipAllLayers() const {
if (noneLayers.size() == 1) {
const auto& val = *noneLayers.begin();
return val == "*";
}
return false;
}
bool skipLayerType(const std::string& layerType) const {
return noneLayers.count(layerType) != 0;
}
bool ignoreUnknownLayers = false;
std::string dumpInternalGraphFileName;
std::string dumpInternalGraphDirectory;
bool dumpAllPasses;
bool disableReorder = false; // TODO: rename to enableReorder and switch logic.
bool disableConvertStages = false;
bool enablePermuteMerging = true;
bool enableReplWithSCRelu = false;
bool enableReplaceWithReduceMean = true;
bool enableTensorIteratorUnrolling = false;
bool forcePureTensorIterator = false;
bool enableMemoryTypesAnnotation = false;
bool enableWeightsAnalysis = true;
bool checkPreprocessingInsideModel = true;
bool enableCustomReshapeParam = false;
//
// Deprecated options
//
float inputScale = 1.0f;
float inputBias = 0.0f;
};
} // namespace vpu

View File

@ -0,0 +1,18 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <string>
#include "ie_parameter.hpp"
template<class OptionConcept>
struct AsParsedParameterEnabler {
static InferenceEngine::Parameter asParameter(const std::string& value) { return {OptionConcept::parse(value)}; }
};
struct AsParameterEnabler {
static InferenceEngine::Parameter asParameter(const std::string& value);
};

View File

@ -0,0 +1,34 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <string>
#include "vpu/configuration/as_parameter_enabler.hpp"
namespace vpu {
namespace details {
enum class Access;
enum class Category;
} // namespace details
class PluginConfiguration;
struct CopyOptimizationOption : public AsParsedParameterEnabler<CopyOptimizationOption> {
using value_type = bool;
static std::string key();
static void validate(const std::string&);
static void validate(const PluginConfiguration&);
static std::string defaultValue();
static value_type parse(const std::string&);
static details::Access access();
static details::Category category();
};
} // namespace vpu

View File

@ -0,0 +1,36 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <string>
#include "vpu/configuration/as_parameter_enabler.hpp"
namespace vpu {
enum class LogLevel;
namespace details {
enum class Access;
enum class Category;
} // namespace details
class PluginConfiguration;
struct LogLevelOption : public AsParameterEnabler {
using value_type = LogLevel;
static std::string key();
static void validate(const std::string&);
static void validate(const PluginConfiguration&);
static std::string defaultValue();
static value_type parse(const std::string&);
static details::Access access();
static details::Category category();
};
} // namespace vpu

View File

@ -0,0 +1,142 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <string>
#include <map>
#include <unordered_map>
#include <unordered_set>
#include <memory>
#include <vpu/parsed_config.hpp>
#include "ie_parameter.hpp"
#include "vpu/utils/logger.hpp"
namespace vpu {
class PluginConfiguration;
struct ConfigurationOptionConcept {
virtual std::string key() const = 0;
virtual void validate(const std::string&) const = 0;
virtual void validate(const PluginConfiguration&) const = 0;
virtual InferenceEngine::Parameter asParameter(const std::string&) const = 0;
};
namespace details {
template<class Option>
struct ConfigurationOptionModel : public ConfigurationOptionConcept {
std::string key() const override { return Option::key(); }
void validate(const std::string& value) const override { return Option::validate(value); }
void validate(const PluginConfiguration& options) const override { Option::validate(options); }
InferenceEngine::Parameter asParameter(const std::string& value) const override { return Option::asParameter(value); }
};
enum class Deprecation {
Off,
On
};
enum class Access {
Private,
Public
};
enum class Category {
CompileTime,
RunTime
};
class ConfigurationEntry {
public:
template<class Option>
ConfigurationEntry(Option, details::Deprecation deprecation)
: m_access(Option::access())
, m_deprecation(deprecation)
, m_category(Option::category())
, m_value(std::make_shared<ConfigurationOptionModel<Option>>())
{}
ConfigurationOptionConcept& get();
const ConfigurationOptionConcept& get() const;
std::string key() const;
bool isPrivate() const;
bool isDeprecated() const;
Category getCategory() const;
private:
Access m_access = Access::Public;
Deprecation m_deprecation = Deprecation::Off;
Category m_category = Category::CompileTime;
std::shared_ptr<ConfigurationOptionConcept> m_value;
};
} // namespace details
// TODO: remove virtual inheritance once all options are migrated
// it's needed to pass updated compilation config to graph transformer
class PluginConfiguration : public virtual ParsedConfig {
public:
PluginConfiguration();
void from(const std::map<std::string, std::string>& config);
void fromAtRuntime(const std::map<std::string, std::string>& config);
std::unordered_set<std::string> getPublicKeys() const;
bool supports(const std::string& key) const;
template<class Option>
void registerOption() {
const auto& key = Option::key();
concepts.emplace(key, details::ConfigurationEntry(Option{}, details::Deprecation::Off));
if (values.count(key) == 0) {
// option could be registered more than once if there are deprecated versions of it
values.emplace(key, Option::defaultValue());
}
}
template<class Option>
void registerDeprecatedOption(const std::string& deprecatedKey) {
const auto& key = Option::key();
concepts.emplace(deprecatedKey, details::ConfigurationEntry(Option{}, details::Deprecation::On));
if (values.count(key) == 0) {
// option could be registered more than once if there are deprecated versions of it
values.emplace(key, Option::defaultValue());
}
}
template<class Option>
typename Option::value_type get() const {
const auto& key = Option::key();
validate(key);
return Option::parse(values.at(key));
}
void set(const std::string& key, const std::string& value);
const std::string& operator[](const std::string& key) const;
InferenceEngine::Parameter asParameter(const std::string& key) const;
virtual void validate() const;
private:
std::unordered_map<std::string, details::ConfigurationEntry> concepts;
std::unordered_map<std::string, std::string> values;
Logger::Ptr logger;
enum class Mode {
Default,
RunTime
};
void create(const std::map<std::string, std::string>& config, Mode mode = Mode::Default);
void validate(const std::string& key) const;
};
} // namespace vpu

View File

@ -0,0 +1,15 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <string>
#include <unordered_map>
namespace vpu {
const std::unordered_map<std::string, bool>& string2switch();
const std::unordered_map<bool, std::string>& switch2string();
} // namespace vpu

View File

@ -10,11 +10,11 @@
#include <string>
#include <vpu/myriad_config.hpp>
#include <vpu/configuration.hpp>
#include <vpu/private_plugin_config.hpp>
#include <vpu/parsed_config_base.hpp>
#include <vpu/graph_transformer.hpp>
#include <vpu/utils/perf_report.hpp>
#include <vpu/utils/logger.hpp>
#include <vpu/utils/enums.hpp>
@ -23,6 +23,12 @@ namespace vpu {
class ParsedConfig : public ParsedConfigBase {
public:
ParsedConfig() = default;
ParsedConfig(const ParsedConfig&) = default;
ParsedConfig& operator=(const ParsedConfig&) = default;
ParsedConfig(ParsedConfig&&) = delete;
ParsedConfig& operator=(ParsedConfig&&) = delete;
const std::string& compilerLogFilePath() const {
return _compilerLogFilePath;
}
@ -31,6 +37,10 @@ public:
return _compileConfig;
}
CompilationConfig& compileConfig() {
return _compileConfig;
}
bool printReceiveTensorTime() const {
return _printReceiveTensorTime;
}

View File

@ -25,10 +25,6 @@ VPU_DECLARE_ENUM(ConfigMode,
class ParsedConfigBase {
public:
LogLevel logLevel() const {
return _logLevel;
}
bool exclusiveAsyncRequests() const {
return _exclusiveAsyncRequests;
}
@ -37,11 +33,9 @@ public:
ParsedConfigBase();
virtual ~ParsedConfigBase();
void update(
const std::map<std::string, std::string>& config,
ConfigMode mode = ConfigMode::Any);
protected:
void update(const std::map<std::string, std::string>& config, ConfigMode mode = ConfigMode::Any);
virtual const std::unordered_set<std::string>& getCompileOptions() const;
virtual const std::unordered_set<std::string>& getRunTimeOptions() const;
virtual const std::unordered_set<std::string>& getDeprecatedOptions() const;
@ -130,7 +124,6 @@ protected:
Logger::Ptr _log;
private:
LogLevel _logLevel = LogLevel::None;
bool _exclusiveAsyncRequests = false;
};

View File

@ -0,0 +1,40 @@
// Copyright (C) 2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <vector>
#include <algorithm>
#include "error.hpp"
namespace vpu {
template<class Key, class Value, template<class...> class Map>
inline std::vector<Key> getKeys(const Map<Key, Value>& map) {
auto keys = std::vector<Key>{};
keys.reserve(map.size());
std::transform(map.cbegin(), map.cend(), std::back_inserter(keys), [](const std::pair<Key, Value>& entry) { return entry.first; });
return keys;
}
template<class Key, class Value, template<class...> class Map>
inline std::vector<Value> getValues(const Map<Key, Value>& map) {
auto values = std::vector<Value>{};
values.reserve(map.size());
std::transform(map.cbegin(), map.cend(), std::back_inserter(values), [](const std::pair<Key, Value>& entry) { return entry.second; });
return values;
}
template<class Key, class Value, template<class...> class Map>
inline Map<Value, Key> inverse(const Map<Key, Value>& map) {
auto inverted = Map<Value, Key>{};
for (const auto& entry : map) {
const auto& insertion = inverted.emplace(entry.second, entry.first);
VPU_THROW_UNLESS(insertion.second, "Could not invert map {} due to duplicated value \"{}\"", map, entry.second);
}
return inverted;
}
} // namespace vpu

View File

@ -29,6 +29,11 @@ public:
using VPUException::VPUException;
};
class UnsupportedConfigurationOptionException : public VPUException {
public:
using VPUException::VPUException;
};
template <class Exception, typename... Args>
void throwFormat(const char* fileName, int lineNumber, const char* messageFormat, Args&&... args) {
IE_THROW(GeneralError) << '\n' << fileName << ':' << lineNumber << ' '
@ -47,13 +52,20 @@ void throwFormat(const char* fileName, int lineNumber, const char* messageFormat
} \
} while (false)
#define VPU_THROW_UNSUPPORTED_UNLESS(condition, ...) \
#define VPU_THROW_UNSUPPORTED_LAYER_UNLESS(condition, ...) \
do { \
if (!(condition)) { \
::vpu::details::throwFormat<::vpu::details::UnsupportedLayerException>(__FILE__, __LINE__, __VA_ARGS__); \
} \
} while (false)
#define VPU_THROW_UNSUPPORTED_OPTION_UNLESS(condition, ...) \
do { \
if (!(condition)) { \
::vpu::details::throwFormat<::vpu::details::UnsupportedConfigurationOptionException>(__FILE__, __LINE__, __VA_ARGS__); \
} \
} while (false)
#ifdef NDEBUG
# define VPU_INTERNAL_CHECK(condition, ...) \
do { \

View File

@ -0,0 +1,21 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "vpu/utils/enums.hpp"
namespace vpu {
VPU_DECLARE_ENUM(LogLevel,
None,
Fatal, /* used for very severe error events that will most probably cause the application to terminate */
Error, /* reporting events which are not expected during normal execution, containing probable reason */
Warning, /* indicating events which are not usual and might lead to errors later */
Info, /* short enough messages about ongoing activity in the process */
Debug, /* more fine-grained messages with references to particular data and explanations */
Trace /* involved and detailed information about execution, helps to trace the execution flow, produces huge output */
)
} // namespace vpu

View File

@ -13,6 +13,7 @@
#include <vpu/utils/enums.hpp>
#include <vpu/utils/auto_scope.hpp>
#include <vpu/utils/io.hpp>
#include <vpu/utils/log_level.hpp>
namespace vpu {
@ -39,20 +40,6 @@ OutputStream::Ptr fileOutput(const std::string& fileName);
OutputStream::Ptr defaultOutput(const std::string& fileName = std::string());
//
// Logger
//
VPU_DECLARE_ENUM(LogLevel,
None,
Fatal, /* used for very severe error events that will most probably cause the application to terminate */
Error, /* reporting events which are not expected during normal execution, containing probable reason */
Warning, /* indicating events which are not usual and might lead to errors later */
Info, /* short enough messages about ongoing activity in the process */
Debug, /* more fine-grained messages with references to particular data and explanations */
Trace /* involved and detailed information about execution, helps to trace the execution flow, produces huge output */
)
class Logger final {
public:
using Ptr = std::shared_ptr<Logger>;

View File

@ -0,0 +1,10 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include <vpu/configuration/as_parameter_enabler.hpp>
InferenceEngine::Parameter AsParameterEnabler::asParameter(const std::string& value) {
return {value};
}

View File

@ -0,0 +1,45 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "vpu/private_plugin_config.hpp"
#include "vpu/utils/containers.hpp"
#include "vpu/configuration/options/copy_optimization.hpp"
#include "vpu/configuration/switch_converters.hpp"
#include "vpu/configuration/plugin_configuration.hpp"
namespace vpu {
void CopyOptimizationOption::validate(const std::string& value) {
const auto& converters = string2switch();
VPU_THROW_UNLESS(converters.count(value) != 0, R"(unexpected copy optimization option value "{}", only {} are supported)", value, getKeys(converters));
}
void CopyOptimizationOption::validate(const PluginConfiguration& configuration) {
validate(configuration[key()]);
}
std::string CopyOptimizationOption::key() {
return InferenceEngine::MYRIAD_COPY_OPTIMIZATION;
}
details::Access CopyOptimizationOption::access() {
return details::Access::Private;
}
details::Category CopyOptimizationOption::category() {
return details::Category::CompileTime;
}
std::string CopyOptimizationOption::defaultValue() {
return InferenceEngine::PluginConfigParams::YES;
}
CopyOptimizationOption::value_type CopyOptimizationOption::parse(const std::string& value) {
const auto& converters = string2switch();
VPU_THROW_UNSUPPORTED_OPTION_UNLESS(converters.count(value) != 0, R"(unexpected copy optimization option value "{}", only {} are supported)",
value, getKeys(converters));
return converters.at(value);
}
} // namespace vpu

View File

@ -0,0 +1,64 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "vpu/configuration/options/log_level.hpp"
#include "vpu/utils/log_level.hpp"
#include "vpu/utils/containers.hpp"
#include "vpu/configuration/plugin_configuration.hpp"
#include "ie_plugin_config.hpp"
#include <unordered_map>
namespace vpu {
namespace {
const std::unordered_map<std::string, LogLevel>& string2level() {
static const std::unordered_map<std::string, LogLevel> converters = {
{CONFIG_VALUE(LOG_NONE), LogLevel::None},
{CONFIG_VALUE(LOG_ERROR), LogLevel::Error},
{CONFIG_VALUE(LOG_WARNING), LogLevel::Warning},
{CONFIG_VALUE(LOG_INFO), LogLevel::Info},
{CONFIG_VALUE(LOG_DEBUG), LogLevel::Debug},
{CONFIG_VALUE(LOG_TRACE), LogLevel::Trace},
};
return converters;
}
} // namespace
void LogLevelOption::validate(const std::string& value) {
const auto& converters = string2level();
VPU_THROW_UNLESS(converters.count(value) != 0, R"(unexpected log level option value "{}", only {} are supported)", value, getKeys(converters));
}
void LogLevelOption::validate(const PluginConfiguration& configuration) {
validate(configuration[key()]);
}
std::string LogLevelOption::key() {
return InferenceEngine::PluginConfigParams::KEY_LOG_LEVEL;
}
details::Access LogLevelOption::access() {
return details::Access::Public;
}
details::Category LogLevelOption::category() {
return details::Category::CompileTime;
}
std::string LogLevelOption::defaultValue() {
return InferenceEngine::PluginConfigParams::LOG_NONE;
}
LogLevelOption::value_type LogLevelOption::parse(const std::string& value) {
const auto& converters = string2level();
VPU_THROW_UNSUPPORTED_OPTION_UNLESS(converters.count(value) != 0, R"(unexpected log level option value "{}", only {} are supported)",
value, getKeys(converters));
return converters.at(value);
}
} // namespace vpu

View File

@ -0,0 +1,114 @@
// Copyright (C) 2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "vpu/utils/error.hpp"
#include "vpu/configuration/plugin_configuration.hpp"
#include "ie_plugin_config.hpp"
namespace vpu {
namespace details {
ConfigurationOptionConcept& ConfigurationEntry::get() {
return *m_value;
}
const ConfigurationOptionConcept& ConfigurationEntry::get() const {
return *m_value;
}
bool ConfigurationEntry::isPrivate() const {
return m_access == Access::Private;
}
bool ConfigurationEntry::isDeprecated() const {
return m_deprecation == Deprecation::On;
}
Category ConfigurationEntry::getCategory() const {
return m_category;
}
std::string ConfigurationEntry::key() const {
return m_value->key();
}
} // namespace details
PluginConfiguration::PluginConfiguration() : logger(std::make_shared<Logger>("Configuration", LogLevel::Warning, consoleOutput())) {}
std::unordered_set<std::string> PluginConfiguration::getPublicKeys() const {
auto publicKeys = std::unordered_set<std::string>{};
for (const auto& entry : concepts) {
const auto& key = entry.first;
const auto& option = entry.second;
if (option.isPrivate()) {
continue;
}
publicKeys.insert(key);
}
return publicKeys;
}
bool PluginConfiguration::supports(const std::string& key) const {
return concepts.count(key) != 0;
}
void PluginConfiguration::from(const std::map<std::string, std::string>& config) {
create(config);
}
void PluginConfiguration::fromAtRuntime(const std::map<std::string, std::string>& config) {
create(config, Mode::RunTime);
}
void PluginConfiguration::validate() const {
for (const auto& option : concepts) {
option.second.get().validate(*this);
}
}
void PluginConfiguration::create(const std::map<std::string, std::string>& config, Mode mode) {
for (const auto& entry : config) {
const auto& key = entry.first;
validate(key);
const auto& optionConcept = concepts.at(key);
if (mode == Mode::RunTime && optionConcept.getCategory() == details::Category::CompileTime) {
logger->warning("Configuration option \"{}\" is used after network is loaded. Its value is going to be ignored.", key);
continue;
}
const auto& value = entry.second;
set(key, value);
}
}
InferenceEngine::Parameter PluginConfiguration::asParameter(const std::string& key) const {
const auto& value = operator[](key);
return concepts.at(key).get().asParameter(value);
}
void PluginConfiguration::validate(const std::string& key) const {
VPU_THROW_UNSUPPORTED_OPTION_UNLESS(supports(key), "Encountered an unsupported key {}, supported keys are {}", key, getPublicKeys());
if (concepts.at(key).isDeprecated()) {
logger->warning("Encountered deprecated option {} usage, consider replacing it with {} option", key, concepts.at(key).key());
}
}
const std::string& PluginConfiguration::operator[](const std::string& key) const {
validate(key);
return values.at(concepts.at(key).key());
}
void PluginConfiguration::set(const std::string& key, const std::string& value) {
validate(key);
const auto& optionConcept = concepts.at(key).get();
optionConcept.validate(value);
values[optionConcept.key()] = value;
}
} // namespace vpu
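The `set`/`operator[]` pair above routes every access through `concepts.at(key).key()`, so a deprecated key and its current replacement resolve to one canonical storage slot and cannot diverge. A minimal self-contained sketch of that aliasing behavior (class and method names are illustrative, not the plugin's API):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical miniature of PluginConfiguration's key aliasing: every
// registered key maps to a canonical key, and values are stored only
// under the canonical one, so setting either alias updates one entry.
class MiniConfiguration {
public:
    void registerOption(const std::string& key, const std::string& canonicalKey) {
        canonical_[key] = canonicalKey;
    }
    void set(const std::string& key, const std::string& value) {
        values_[canonical_.at(key)] = value;  // alias resolves here
    }
    const std::string& operator[](const std::string& key) const {
        return values_.at(canonical_.at(key));
    }
private:
    std::map<std::string, std::string> canonical_;  // key -> canonical key
    std::map<std::string, std::string> values_;     // canonical key -> value
};
```

Writing through the deprecated name and reading through the new one then observes the same value, which is the behavior the deprecation warning in `validate` relies on.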


@ -0,0 +1,25 @@
// Copyright (C) 2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "vpu/utils/containers.hpp"
#include "vpu/configuration/switch_converters.hpp"
#include "ie_plugin_config.hpp"
namespace vpu {
const std::unordered_map<std::string, bool>& string2switch() {
static const std::unordered_map<std::string, bool> converters = {
{CONFIG_VALUE(NO), false},
{CONFIG_VALUE(YES), true}
};
return converters;
}
const std::unordered_map<bool, std::string>& switch2string() {
static const auto converters = inverse(string2switch());
return converters;
}
} // namespace vpu
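`switch2string()` above is built by inverting `string2switch()` with an `inverse()` helper from `vpu/utils/containers.hpp`. A plausible self-contained sketch of that helper, assuming it simply flips keys and values:

```cpp
#include <string>
#include <unordered_map>

// Sketch of the inverse() helper assumed above: swap keys and values.
// If two keys shared a value, only the first inserted pair would
// survive; the switch tables above have unique values on both sides.
template <typename K, typename V>
std::unordered_map<V, K> inverse(const std::unordered_map<K, V>& original) {
    std::unordered_map<V, K> inverted;
    for (const auto& entry : original) {
        inverted.emplace(entry.second, entry.first);
    }
    return inverted;
}
```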



@ -169,7 +169,6 @@ void ParsedConfig::parse(const std::map<std::string, std::string>& config) {
setOption(_compileConfig.dumpAllPasses, switches, config, ie::MYRIAD_DUMP_ALL_PASSES);
setOption(_compileConfig.detectBatch, switches, config, ie::MYRIAD_DETECT_NETWORK_BATCH);
setOption(_compileConfig.copyOptimization, switches, config, ie::MYRIAD_COPY_OPTIMIZATION);
setOption(_compileConfig.packDataInCmx, switches, config, ie::MYRIAD_PACK_DATA_IN_CMX);
setOption(_compileConfig.ignoreUnknownLayers, switches, config, ie::MYRIAD_IGNORE_UNKNOWN_LAYERS);
setOption(_compileConfig.hwOptimization, switches, config, ie::MYRIAD_ENABLE_HW_ACCELERATION);


@ -59,13 +59,7 @@ void ParsedConfigBase::update(
}
const std::unordered_set<std::string>& ParsedConfigBase::getCompileOptions() const {
IE_SUPPRESS_DEPRECATED_START
static const std::unordered_set<std::string> options = {
CONFIG_KEY(LOG_LEVEL),
VPU_CONFIG_KEY(LOG_LEVEL)
};
IE_SUPPRESS_DEPRECATED_END
static const std::unordered_set<std::string> options;
return options;
}
@ -73,8 +67,6 @@ const std::unordered_set<std::string>& ParsedConfigBase::getRunTimeOptions() con
IE_SUPPRESS_DEPRECATED_START
static const std::unordered_set<std::string> options = {
CONFIG_KEY(EXCLUSIVE_ASYNC_REQUESTS),
CONFIG_KEY(LOG_LEVEL),
VPU_CONFIG_KEY(LOG_LEVEL)
};
IE_SUPPRESS_DEPRECATED_END
@ -82,37 +74,12 @@ IE_SUPPRESS_DEPRECATED_END
}
const std::unordered_set<std::string>& ParsedConfigBase::getDeprecatedOptions() const {
IE_SUPPRESS_DEPRECATED_START
static const std::unordered_set<std::string> options = {
VPU_CONFIG_KEY(LOG_LEVEL)
};
IE_SUPPRESS_DEPRECATED_END
static const std::unordered_set<std::string> options;
return options;
}
void ParsedConfigBase::parse(const std::map<std::string, std::string>& config) {
static const std::unordered_map<std::string, LogLevel> logLevels = {
{ CONFIG_VALUE(LOG_NONE), LogLevel::None },
{ CONFIG_VALUE(LOG_ERROR), LogLevel::Error },
{ CONFIG_VALUE(LOG_WARNING), LogLevel::Warning },
{ CONFIG_VALUE(LOG_INFO), LogLevel::Info },
{ CONFIG_VALUE(LOG_DEBUG), LogLevel::Debug },
{ CONFIG_VALUE(LOG_TRACE), LogLevel::Trace }
};
setOption(_logLevel, logLevels, config, CONFIG_KEY(LOG_LEVEL));
setOption(_exclusiveAsyncRequests, switches, config, CONFIG_KEY(EXCLUSIVE_ASYNC_REQUESTS));
IE_SUPPRESS_DEPRECATED_START
setOption(_logLevel, logLevels, config, VPU_CONFIG_KEY(LOG_LEVEL));
IE_SUPPRESS_DEPRECATED_END
#ifndef NDEBUG
if (const auto envVar = std::getenv("IE_VPU_LOG_LEVEL")) {
_logLevel = logLevels.at(envVar);
}
#endif
}
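The removed `parse()` body above resolves the configured log-level string through a static table, then lets the `IE_VPU_LOG_LEVEL` environment variable override it in debug builds. A stripped-down, self-contained sketch of that lookup (enum and function names are illustrative):

```cpp
#include <cstdlib>
#include <string>
#include <unordered_map>

enum class LogLevel { None, Error, Warning, Info, Debug, Trace };

// Stripped-down version of the lookup in ParsedConfigBase::parse:
// map the CONFIG_VALUE string to an enum, then apply the environment
// override (which the original compiles in only for non-NDEBUG builds).
LogLevel resolveLogLevel(const std::string& configured) {
    static const std::unordered_map<std::string, LogLevel> logLevels = {
        {"LOG_NONE", LogLevel::None},       {"LOG_ERROR", LogLevel::Error},
        {"LOG_WARNING", LogLevel::Warning}, {"LOG_INFO", LogLevel::Info},
        {"LOG_DEBUG", LogLevel::Debug},     {"LOG_TRACE", LogLevel::Trace}};
    auto level = logLevels.at(configured);  // throws on unknown strings
    if (const auto envVar = std::getenv("IE_VPU_LOG_LEVEL")) {
        level = logLevels.at(envVar);  // debug-only override in the original
    }
    return level;
}
```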
std::unordered_set<std::string> ParsedConfigBase::merge(


@ -48,8 +48,13 @@ function(add_graph_transformer_target TARGET_NAME STATIC_IE)
target_link_libraries(${TARGET_NAME} PUBLIC pugixml vpu_common_lib)
endif()
target_link_libraries(${TARGET_NAME} PUBLIC ${NGRAPH_LIBRARIES}
PRIVATE openvino::itt)
target_link_libraries(${TARGET_NAME}
PUBLIC
${NGRAPH_LIBRARIES}
PRIVATE
openvino::itt
mvnc # TODO: remove once all options are migrated
)
if(WIN32)
target_compile_definitions(${TARGET_NAME} PRIVATE NOMINMAX)


@ -8,28 +8,29 @@
#include <vpu/model/model.hpp>
#include <vpu/utils/logger.hpp>
#include <vpu/utils/profiling.hpp>
#include <mvnc.h>
namespace vpu {
struct DeviceResources {
static int numShaves(const Platform& platform);
static int numSlices(const Platform& platform);
static int numShaves(const ncDevicePlatform_t& platform);
static int numSlices(const ncDevicePlatform_t& platform);
static int numStreams();
};
struct DefaultAllocation {
static int numStreams(const Platform& platform, const CompilationConfig& configuration);
static int numSlices(const Platform& platform, int numStreams);
static int numShaves(const Platform& platform, int numStreams, int numSlices);
static int numStreams(const ncDevicePlatform_t& platform, const PluginConfiguration& configuration);
static int numSlices(const ncDevicePlatform_t& platform, int numStreams);
static int numShaves(const ncDevicePlatform_t& platform, int numStreams, int numSlices);
static int tilingCMXLimit(int numSlices);
};
struct CompileEnv final {
public:
Platform platform;
ncDevicePlatform_t platform;
Resources resources;
CompilationConfig config;
PluginConfiguration config;
Logger::Ptr log;
@ -49,14 +50,14 @@ public:
static const CompileEnv* getOrNull();
static void init(
Platform platform,
const CompilationConfig& config,
const Logger::Ptr& log);
static void updateConfig(const CompilationConfig& config);
ncDevicePlatform_t platform,
const PluginConfiguration& config,
const Logger::Ptr& log);
static void updateConfig(const PluginConfiguration& config);
static void free();
private:
explicit CompileEnv(Platform platform);
explicit CompileEnv(ncDevicePlatform_t platform);
};
} // namespace vpu
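`CompileEnv` above follows an init/get/getOrNull/free lifetime around a thread-local pointer (visible later in this diff as `thread_local CompileEnv* g_compileEnv`). A simplified, self-contained sketch of that contract, with placeholder types standing in for the real ones:

```cpp
#include <cassert>

// Placeholder standing in for CompileEnv; the real struct carries the
// platform, resources, configuration, and logger.
struct Env {
    int platform = 0;
};

namespace {
// One environment per thread, as in the original g_compileEnv.
thread_local Env* g_env = nullptr;
}  // namespace

void envInit(int platform) { g_env = new Env{platform}; }

const Env* envGetOrNull() { return g_env; }

const Env& envGet() {
    assert(g_env != nullptr);  // mirrors IE_ASSERT in CompileEnv::get()
    return *g_env;
}

void envFree() {
    delete g_env;
    g_env = nullptr;
}
```

The compile entry points in this diff (`compileNetwork`, `compileModel`, `getSupportedLayers`) pair `CompileEnv::init` with an `AutoScope` that calls `CompileEnv::free`, the same bracketed lifetime sketched here.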


@ -21,108 +21,14 @@
#include <vpu/utils/perf_report.hpp>
#include <vpu/utils/logger.hpp>
#include <vpu/utils/optional.hpp>
#include <vpu/configuration/plugin_configuration.hpp>
#include "mvnc.h"
namespace vpu {
namespace ie = InferenceEngine;
//
// CompilationConfig
//
VPU_DECLARE_ENUM(Platform,
MYRIAD_2 = 2450,
MYRIAD_X = 2480,
)
struct CompilationConfig final {
//
// Compilation options
//
int numSHAVEs = -1;
int numCMXSlices = -1;
int numExecutors = -1;
int tilingCMXLimitKB = -1;
bool hwOptimization = true;
bool hwExtraSplit = false;
std::string irWithVpuScalesDir;
std::string customLayers;
bool detectBatch = true;
Optional<bool> copyOptimization;
Optional<bool> injectSwOps;
Optional<bool> packDataInCmx;
bool mergeHwPoolToConv = true;
bool hwDilation = false;
bool forceDeprecatedCnnConversion = false;
bool enableEarlyEltwiseReLUFusion = true;
std::map<std::string, std::vector<int>> ioStrides;
//
// Debug options
//
ie::details::caseless_set<std::string> hwWhiteList;
ie::details::caseless_set<std::string> hwBlackList;
bool hwDisabled(const std::string& layerName) const {
if (!hwWhiteList.empty()) {
return hwWhiteList.count(layerName) == 0;
}
if (!hwBlackList.empty()) {
return hwBlackList.count(layerName) != 0;
}
return false;
}
ie::details::caseless_set<std::string> noneLayers;
bool skipAllLayers() const {
if (noneLayers.size() == 1) {
const auto& val = *noneLayers.begin();
return val == "*";
}
return false;
}
bool skipLayerType(const std::string& layerType) const {
return noneLayers.count(layerType) != 0;
}
bool ignoreUnknownLayers = false;
std::string dumpInternalGraphFileName;
std::string dumpInternalGraphDirectory;
bool dumpAllPasses;
bool disableReorder = false; // TODO: rename to enableReorder and switch logic.
bool disableConvertStages = false;
bool enablePermuteMerging = true;
bool enableReplWithSCRelu = false;
bool enableReplaceWithReduceMean = true;
bool enableTensorIteratorUnrolling = false;
bool forcePureTensorIterator = false;
bool enableMemoryTypesAnnotation = false;
bool enableWeightsAnalysis = true;
bool checkPreprocessingInsideModel = true;
bool enableCustomReshapeParam = false;
//
// Deprecated options
//
float inputScale = 1.0f;
float inputBias = 0.0f;
};
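The `hwDisabled` helper in the removed struct above encodes a precedence rule: a non-empty white list wins over the black list, and with both empty nothing is disabled. A standalone restatement of that logic, using `std::set` in place of the caseless IE set:

```cpp
#include <set>
#include <string>

// Free-function restatement of CompilationConfig::hwDisabled:
// a non-empty white list restricts HW execution to listed layers;
// otherwise a non-empty black list excludes listed layers;
// otherwise no layer is disabled.
bool hwDisabled(const std::set<std::string>& whiteList,
                const std::set<std::string>& blackList,
                const std::string& layerName) {
    if (!whiteList.empty()) {
        return whiteList.count(layerName) == 0;  // only listed layers run on HW
    }
    if (!blackList.empty()) {
        return blackList.count(layerName) != 0;  // listed layers are excluded
    }
    return false;
}
```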
//
// DataInfo
//
@ -165,17 +71,17 @@ struct CompiledGraph final {
// compileNetwork
//
CompiledGraph::Ptr compileNetwork(const ie::CNNNetwork& network, Platform platform, const CompilationConfig& config, const Logger::Ptr& log,
const ie::ICore* core);
CompiledGraph::Ptr compileNetwork(const ie::CNNNetwork& network, ncDevicePlatform_t platform, const PluginConfiguration& config, const Logger::Ptr& log,
const ie::ICore* core);
CompiledGraph::Ptr compileSubNetwork(const ie::CNNNetwork& network, const CompilationConfig& subConfig, const ie::ICore* core);
CompiledGraph::Ptr compileSubNetwork(const ie::CNNNetwork& network, const PluginConfiguration& subConfig, const ie::ICore* core);
//
// getSupportedLayers
//
std::set<std::string> getSupportedLayers(const ie::CNNNetwork& network, Platform platform, const CompilationConfig& config, const Logger::Ptr& log,
const ie::ICore* core);
std::set<std::string> getSupportedLayers(const ie::CNNNetwork& network, ncDevicePlatform_t platform, const PluginConfiguration& config, const Logger::Ptr& log,
const ie::ICore* core);
//
// Blob version and checks


@ -12,8 +12,8 @@ namespace vpu {
CompiledGraph::Ptr compileModel(
const Model& model,
Platform platform,
const CompilationConfig& config,
ncDevicePlatform_t platform,
const PluginConfiguration& config,
const Logger::Ptr& log);
} // namespace vpu


@ -85,12 +85,12 @@ void BackEnd::dumpModel(
std::string fileName;
if (!env.config.dumpInternalGraphFileName.empty()) {
fileName = fileNameNoExt(env.config.dumpInternalGraphFileName);
} else if (!env.config.dumpInternalGraphDirectory.empty()) {
if (!env.config.compileConfig().dumpInternalGraphFileName.empty()) {
fileName = fileNameNoExt(env.config.compileConfig().dumpInternalGraphFileName);
} else if (!env.config.compileConfig().dumpInternalGraphDirectory.empty()) {
fileName = formatString(
"%s/vpu_graph_%f%f%i_%s",
env.config.dumpInternalGraphDirectory,
env.config.compileConfig().dumpInternalGraphDirectory,
std::setw(2), std::setfill('0'),
model->attrs().get<int>("index"),
replaceBadCharacters(model->name()));
@ -99,7 +99,7 @@ void BackEnd::dumpModel(
}
if (!postfix.empty()) {
if (!env.config.dumpAllPasses) {
if (!env.config.compileConfig().dumpAllPasses) {
return;
}


@ -29,7 +29,7 @@ void FrontEnd::detectNetworkBatch(
using PrecisionsMap = std::map<std::string, ie::Precision>;
const auto& env = CompileEnv::get();
if (!env.config.detectBatch) {
if (!env.config.compileConfig().detectBatch) {
// skip batch extraction step and go as is
return;
}


@ -436,7 +436,7 @@ void FrontEnd::processTrivialCases(const Model& model) {
void FrontEnd::defaultOnUnsupportedLayerCallback(const Model& model, const ie::CNNLayerPtr& layer, const DataVector& inputs, const DataVector& outputs,
const std::string& extraMessage) {
const auto& env = CompileEnv::get();
VPU_THROW_UNSUPPORTED_UNLESS(env.config.ignoreUnknownLayers, "Failed to compile layer \"%v\": %v", layer->name, extraMessage);
VPU_THROW_UNSUPPORTED_LAYER_UNLESS(env.config.compileConfig().ignoreUnknownLayers, "Failed to compile layer \"%v\": %v", layer->name, extraMessage);
_stageBuilder->addNoneStage(model, layer->name, layer, inputs, outputs);
}
@ -466,15 +466,15 @@ ModelPtr FrontEnd::runCommonPasses(ie::CNNNetwork network,
// Parse custom layers
//
if (!env.config.customLayers.empty()) {
env.log->trace("Parse custom layers : %s", env.config.customLayers);
if (!env.config.compileConfig().customLayers.empty()) {
env.log->trace("Parse custom layers : %s", env.config.compileConfig().customLayers);
VPU_LOGGER_SECTION(env.log);
if (env.platform != Platform::MYRIAD_X) {
if (env.platform != ncDevicePlatform_t::NC_MYRIAD_X) {
VPU_THROW_FORMAT("Custom layers are not supported for %v platforms", env.platform);
}
_customLayers = CustomLayer::loadFromFile(env.config.customLayers);
_customLayers = CustomLayer::loadFromFile(env.config.compileConfig().customLayers);
}
//
@ -494,7 +494,7 @@ ModelPtr FrontEnd::runCommonPasses(ie::CNNNetwork network,
env.log->trace("Update IE Network");
VPU_LOGGER_SECTION(env.log);
if (network.getFunction() && env.config.forceDeprecatedCnnConversion) {
if (network.getFunction() && env.config.compileConfig().forceDeprecatedCnnConversion) {
network = convertNetwork(network);
}
@ -545,7 +545,7 @@ ModelPtr FrontEnd::runCommonPasses(ie::CNNNetwork network,
processTrivialCases(model);
if (!CompileEnv::get().config.disableConvertStages) {
if (!CompileEnv::get().config.compileConfig().disableConvertStages) {
addDataTypeConvertStages(model);
}
@ -567,7 +567,7 @@ ModelPtr FrontEnd::runCommonPasses(ie::CNNNetwork network,
getInputAndOutputData(model, layer, inputs, outputs);
if (env.config.skipAllLayers() || env.config.skipLayerType(layer->type)) {
if (env.config.compileConfig().skipAllLayers() || env.config.compileConfig().skipLayerType(layer->type)) {
_stageBuilder->addNoneStage(model, layer->name, layer, inputs, outputs);
supportedLayer(layer);
continue;


@ -22,7 +22,7 @@ void FrontEnd::addDataTypeConvertStages(const Model& model) {
env.log->trace("Add Data type conversion stages");
VPU_LOGGER_SECTION(env.log);
const bool hasScaleBias = env.config.inputScale != 1.0f || env.config.inputBias != 0.0f;
const bool hasScaleBias = env.config.compileConfig().inputScale != 1.0f || env.config.compileConfig().inputBias != 0.0f;
for (const auto& input : model->datas()) {
if (input->usage() != DataUsage::Input) {
@ -38,11 +38,11 @@ void FrontEnd::addDataTypeConvertStages(const Model& model) {
env.log->trace("Apply deprecated scale/bias parameters");
std::ostringstream postfix;
if (env.config.inputScale != 1.0f) {
postfix << "@SCALE=" << InferenceEngine::CNNLayer::ie_serialize_float(env.config.inputScale);
if (env.config.compileConfig().inputScale != 1.0f) {
postfix << "@SCALE=" << InferenceEngine::CNNLayer::ie_serialize_float(env.config.compileConfig().inputScale);
}
if (env.config.inputBias != 0.0f) {
postfix << "@BIAS=" << InferenceEngine::CNNLayer::ie_serialize_float(env.config.inputBias);
if (env.config.compileConfig().inputBias != 0.0f) {
postfix << "@BIAS=" << InferenceEngine::CNNLayer::ie_serialize_float(env.config.compileConfig().inputBias);
}
const auto scaledInput = model->duplicateData(
@ -55,9 +55,9 @@ void FrontEnd::addDataTypeConvertStages(const Model& model) {
model,
scaledInput->name(),
nullptr,
env.config.inputScale,
env.config.compileConfig().inputScale,
1.0f,
env.config.inputBias,
env.config.compileConfig().inputBias,
input,
scaledInput);
}
@ -89,8 +89,8 @@ void FrontEnd::addDataTypeConvertStages(const Model& model) {
inputFP16->name(),
input,
inputFP16,
env.config.inputScale,
env.config.inputBias);
env.config.compileConfig().inputScale,
env.config.compileConfig().inputBias);
break;
}


@ -25,8 +25,8 @@ void FrontEnd::parseInputAndOutputData(const Model& model) {
VPU_LOGGER_SECTION(env.log);
const auto parseIOStrides = [&env](const std::string& name, const Data& data) {
const auto& match = env.config.ioStrides.find(name);
if (match == env.config.ioStrides.end()) {
const auto& match = env.config.compileConfig().ioStrides.find(name);
if (match == env.config.compileConfig().ioStrides.end()) {
return;
}


@ -21,7 +21,7 @@ void FrontEnd::unrollLoops(ie::CNNNetwork& network) {
env.log->trace("Unroll TensorIterator loops");
VPU_LOGGER_SECTION(env.log);
if (!env.config.irWithVpuScalesDir.empty()) {
if (!env.config.compileConfig().irWithVpuScalesDir.empty()) {
// TODO: Scale dumps does not work with IR, which contain Tensor Iterator layers, because we cannot serialize them. #-23429
for (auto iterator = ie::details::CNNNetworkIterator(network); iterator != ie::details::CNNNetworkIterator(); ++iterator) {
const auto& layer = *iterator;
@ -30,11 +30,11 @@ void FrontEnd::unrollLoops(ie::CNNNetwork& network) {
}
}
if (env.config.forcePureTensorIterator) {
if (env.config.compileConfig().forcePureTensorIterator) {
return;
}
if (env.config.enableTensorIteratorUnrolling) {
if (env.config.compileConfig().enableTensorIteratorUnrolling) {
ie::NetPass::UnrollTI(network);
} else {
// Try to convert network to a RNN sequence due to performance reasons


@ -42,6 +42,7 @@
#include <vpu/utils/auto_scope.hpp>
#include <vpu/utils/dot_io.hpp>
#include <vpu/utils/file_system.hpp>
#include <mvnc.h>
namespace vpu {
@ -55,7 +56,7 @@ thread_local CompileEnv* g_compileEnv = nullptr;
} // namespace
CompileEnv::CompileEnv(Platform platform) : platform(platform) {}
CompileEnv::CompileEnv(ncDevicePlatform_t platform) : platform(platform) {}
const CompileEnv& CompileEnv::get() {
IE_ASSERT(g_compileEnv != nullptr);
@ -70,7 +71,7 @@ const CompileEnv* CompileEnv::getOrNull() {
return g_compileEnv;
}
void CompileEnv::init(Platform platform, const CompilationConfig& config, const Logger::Ptr& log) {
void CompileEnv::init(ncDevicePlatform_t platform, const PluginConfiguration& config, const Logger::Ptr& log) {
g_compileEnv = new CompileEnv(platform);
g_compileEnv->config = config;
g_compileEnv->log = log;
@ -79,31 +80,37 @@ void CompileEnv::init(Platform platform, const CompilationConfig& config, const
g_compileEnv->profile.setLogger(log);
#endif
if (platform == Platform::MYRIAD_2) {
g_compileEnv->config.hwOptimization = false;
if (platform == ncDevicePlatform_t::NC_MYRIAD_2) {
g_compileEnv->config.compileConfig().hwOptimization = false;
}
VPU_THROW_UNLESS(g_compileEnv->config.numSHAVEs <= g_compileEnv->config.numCMXSlices,
VPU_THROW_UNLESS(g_compileEnv->config.compileConfig().numSHAVEs <= g_compileEnv->config.compileConfig().numCMXSlices,
R"(Value of configuration option ("{}") must not be greater than value of configuration option ("{}"), but {} > {} was provided)",
ie::MYRIAD_NUMBER_OF_SHAVES, ie::MYRIAD_NUMBER_OF_CMX_SLICES, config.numSHAVEs, config.numCMXSlices);
ie::MYRIAD_NUMBER_OF_SHAVES, ie::MYRIAD_NUMBER_OF_CMX_SLICES, config.compileConfig().numSHAVEs, config.compileConfig().numCMXSlices);
const auto numExecutors = config.numExecutors != -1 ? config.numExecutors : DefaultAllocation::numStreams(platform, config);
const auto numExecutors = config.compileConfig().numExecutors != -1 ? config.compileConfig().numExecutors : DefaultAllocation::numStreams(platform, config);
VPU_THROW_UNLESS(numExecutors >= 1 && numExecutors <= DeviceResources::numStreams(),
R"(Value of configuration option ("{}") must be in the range [{}, {}], actual is "{}")",
ie::MYRIAD_THROUGHPUT_STREAMS, 1, DeviceResources::numStreams(), numExecutors);
const auto numSlices = config.numCMXSlices != -1 ? config.numCMXSlices : DefaultAllocation::numSlices(platform, numExecutors);
const auto numSlices = config.compileConfig().numCMXSlices != -1
? config.compileConfig().numCMXSlices
: DefaultAllocation::numSlices(platform, numExecutors);
VPU_THROW_UNLESS(numSlices >= 1 && numSlices <= DeviceResources::numSlices(platform),
R"(Value of configuration option ("{}") must be in the range [{}, {}], actual is "{}")",
ie::MYRIAD_NUMBER_OF_CMX_SLICES, 1, DeviceResources::numSlices(platform), numSlices);
int defaultCmxLimit = DefaultAllocation::tilingCMXLimit(numSlices);
const auto tilingCMXLimit = config.tilingCMXLimitKB != -1 ? std::min(config.tilingCMXLimitKB * 1024, defaultCmxLimit) : defaultCmxLimit;
const auto tilingCMXLimit = config.compileConfig().tilingCMXLimitKB != -1
? std::min(config.compileConfig().tilingCMXLimitKB * 1024, defaultCmxLimit)
: defaultCmxLimit;
VPU_THROW_UNLESS(tilingCMXLimit >= 0,
R"(Value of configuration option ("{}") must be greater than {}, actual is "{}")",
ie::MYRIAD_TILING_CMX_LIMIT_KB, 0, tilingCMXLimit);
const auto numShaves = config.numSHAVEs != -1 ? config.numSHAVEs : DefaultAllocation::numShaves(platform, numExecutors, numSlices);
const auto numShaves = config.compileConfig().numSHAVEs != -1
? config.compileConfig().numSHAVEs
: DefaultAllocation::numShaves(platform, numExecutors, numSlices);
VPU_THROW_UNLESS(numShaves >= 1 && numShaves <= DeviceResources::numShaves(platform),
R"(Value of configuration option ("{}") must be in the range [{}, {}], actual is "{}")",
ie::MYRIAD_NUMBER_OF_SHAVES, 1, DeviceResources::numShaves(platform), numShaves);
@ -123,7 +130,7 @@ void CompileEnv::init(Platform platform, const CompilationConfig& config, const
g_compileEnv->initialized = true;
}
void CompileEnv::updateConfig(const CompilationConfig& config) {
void CompileEnv::updateConfig(const PluginConfiguration& config) {
IE_ASSERT(g_compileEnv != nullptr);
IE_ASSERT(g_compileEnv->initialized);
@ -165,9 +172,9 @@ CompiledGraph::Ptr compileImpl(const ie::CNNNetwork& network, const ie::ICore* c
middleEnd->run(model);
if (!env.config.irWithVpuScalesDir.empty()) {
network.serialize(env.config.irWithVpuScalesDir + "/" + network.getName() + "_scales.xml",
env.config.irWithVpuScalesDir + "/" + network.getName() + "_scales.bin");
if (!env.config.compileConfig().irWithVpuScalesDir.empty()) {
network.serialize(env.config.compileConfig().irWithVpuScalesDir + "/" + network.getName() + "_scales.xml",
env.config.compileConfig().irWithVpuScalesDir + "/" + network.getName() + "_scales.bin");
}
return backEnd->build(model, frontEnd->origLayers());
@ -191,8 +198,8 @@ CompiledGraph::Ptr compileImpl(const Model& model) {
} // namespace
CompiledGraph::Ptr compileNetwork(const ie::CNNNetwork& network, Platform platform, const CompilationConfig& config, const Logger::Ptr& log,
const ie::ICore* core) {
CompiledGraph::Ptr compileNetwork(const ie::CNNNetwork& network, ncDevicePlatform_t platform, const PluginConfiguration& config, const Logger::Ptr& log,
const ie::ICore* core) {
CompileEnv::init(platform, config, log);
AutoScope autoDeinit([] {
CompileEnv::free();
@ -205,8 +212,8 @@ CompiledGraph::Ptr compileNetwork(const ie::CNNNetwork& network, Platform platfo
CompiledGraph::Ptr compileModel(
const Model& model,
Platform platform,
const CompilationConfig& config,
ncDevicePlatform_t platform,
const PluginConfiguration& config,
const Logger::Ptr& log) {
CompileEnv::init(platform, config, log);
AutoScope autoDeinit([] {
@ -218,7 +225,7 @@ CompiledGraph::Ptr compileModel(
return compileImpl(model);
}
CompiledGraph::Ptr compileSubNetwork(const ie::CNNNetwork& network, const CompilationConfig& subConfig, const ie::ICore* core) {
CompiledGraph::Ptr compileSubNetwork(const ie::CNNNetwork& network, const PluginConfiguration& subConfig, const ie::ICore* core) {
VPU_PROFILE(compileSubNetwork);
const auto& env = CompileEnv::get();
@ -238,11 +245,11 @@ CompiledGraph::Ptr compileSubNetwork(const ie::CNNNetwork& network, const Compil
//
std::set<std::string> getSupportedLayers(
const ie::CNNNetwork& network,
Platform platform,
const CompilationConfig& config,
const Logger::Ptr& log,
const ie::ICore* core) {
const ie::CNNNetwork& network,
ncDevicePlatform_t platform,
const PluginConfiguration& config,
const Logger::Ptr& log,
const ie::ICore* core) {
CompileEnv::init(platform, config, log);
AutoScope autoDeinit([] {
CompileEnv::free();
@ -255,28 +262,28 @@ std::set<std::string> getSupportedLayers(
return frontEnd->checkSupportedLayers(network);
}
int DeviceResources::numShaves(const Platform& platform) {
return platform == Platform::MYRIAD_2 ? 12 : 16;
int DeviceResources::numShaves(const ncDevicePlatform_t& platform) {
return platform == ncDevicePlatform_t::NC_MYRIAD_2 ? 12 : 16;
}
int DeviceResources::numSlices(const Platform& platform) {
return platform == Platform::MYRIAD_2 ? 12 : 19;
int DeviceResources::numSlices(const ncDevicePlatform_t& platform) {
return platform == ncDevicePlatform_t::NC_MYRIAD_2 ? 12 : 19;
}
int DeviceResources::numStreams() {
return 3;
}
int DefaultAllocation::numStreams(const Platform& platform, const CompilationConfig& configuration) {
return platform == Platform::MYRIAD_X && configuration.hwOptimization ? 2 : 1;
int DefaultAllocation::numStreams(const ncDevicePlatform_t& platform, const PluginConfiguration& configuration) {
return platform == ncDevicePlatform_t::NC_MYRIAD_X && configuration.compileConfig().hwOptimization ? 2 : 1;
}
int DefaultAllocation::numSlices(const Platform& platform, int numStreams) {
int DefaultAllocation::numSlices(const ncDevicePlatform_t& platform, int numStreams) {
const auto capabilities = DeviceResources::numSlices(platform);
return capabilities / numStreams;
}
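The default-allocation functions above split device capacity evenly across streams using integer division (MYRIAD_2 exposes 12 shaves and 12 slices, MYRIAD_X 16 shaves and 19 slices, with at most 3 streams). A self-contained sketch of the slice arithmetic, with an illustrative enum standing in for `ncDevicePlatform_t`:

```cpp
// Illustrative platform enum; the real code takes ncDevicePlatform_t.
enum class Platform { Myriad2, MyriadX };

// Device capability, as in DeviceResources::numSlices above.
int numSlicesOnDevice(Platform p) { return p == Platform::Myriad2 ? 12 : 19; }

// Default slice allocation mirrors DefaultAllocation::numSlices:
// available slices divided evenly between streams (integer division,
// so an odd slice is simply left unassigned).
int defaultNumSlices(Platform p, int numStreams) {
    return numSlicesOnDevice(p) / numStreams;
}
```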
int DefaultAllocation::numShaves(const Platform& platform, int numStreams, int numSlices) {
int DefaultAllocation::numShaves(const ncDevicePlatform_t& platform, int numStreams, int numSlices) {
const auto numAvailableShaves = DeviceResources::numShaves(platform);
if (numStreams == 1) {
return numAvailableShaves;


@ -10,6 +10,7 @@
#include <string>
#include <vpu/compile_env.hpp>
#include <vpu/configuration/options/copy_optimization.hpp>
namespace vpu {
@ -93,7 +94,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
ADD_PASS(convertShapeNotation);
ADD_DUMP_PASS("convertShapeNotation");
if (!env.config.disableReorder && !env.config.hwOptimization) {
if (!env.config.compileConfig().disableReorder && !env.config.compileConfig().hwOptimization) {
ADD_PASS(reorderInputsToChannelMinor);
ADD_DUMP_PASS("reorderInputsToChannelMinor");
}
@ -125,7 +126,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
// To overcome fp16 limitations
//
if (env.config.hwOptimization && env.config.enableWeightsAnalysis) {
if (env.config.compileConfig().hwOptimization && env.config.compileConfig().enableWeightsAnalysis) {
ADD_PASS(analyzeWeightableLayers);
ADD_DUMP_PASS("analyzeWeightableLayers");
}
@ -150,7 +151,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
// Model HW-specific optimizations
//
if (env.config.hwOptimization) {
if (env.config.compileConfig().hwOptimization) {
ADD_PASS(replaceFCbyConv);
ADD_DUMP_PASS("replaceFCbyConv");
@ -161,7 +162,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
ADD_PASS(replaceDeconvByConv);
ADD_DUMP_PASS("replaceDeconvByConv");
if (env.config.hwDilation) {
if (env.config.compileConfig().hwDilation) {
ADD_PASS(reshapeDilationConv);
ADD_DUMP_PASS("reshapeDilationConv");
}
@ -173,7 +174,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
// Pass should be located before "adjustDataBatch" because "adjustDataBatch" specifies "origConvOutput" attribute
// for convolution in order to provide that information to "hwConvTiling" pass.
// Otherwise, "hwConvTiling" will see incorrect values in "origConvOutput" attribute.
if (env.config.enableCustomReshapeParam) {
if (env.config.compileConfig().enableCustomReshapeParam) {
ADD_PASS(reshapeBeforeConvTiling);
ADD_DUMP_PASS("reshapeBeforeConvTiling");
}
@ -197,7 +198,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
ADD_PASS(hwPadding);
ADD_DUMP_PASS("hwPadding");
if (env.config.hwOptimization) {
if (env.config.compileConfig().hwOptimization) {
ADD_PASS(splitLargeKernelConv);
ADD_DUMP_PASS("splitLargeKernelConv");
}
@ -209,7 +210,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
ADD_PASS(adjustDataBatch);
ADD_DUMP_PASS("adjustDataBatch");
if (env.config.enableReplWithSCRelu) {
if (env.config.compileConfig().enableReplWithSCRelu) {
ADD_PASS(replaceWithSCReLU);
ADD_DUMP_PASS("replaceWithSCReLU");
}
@ -218,13 +219,13 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
// HW stages tiling
//
if (env.config.hwOptimization) {
if (env.config.compileConfig().hwOptimization) {
ADD_PASS(hwConvTiling);
ADD_PASS(hwPoolTiling);
ADD_PASS(hwFullyConnectedTiling);
ADD_DUMP_PASS("hwTiling");
if (env.config.hwExtraSplit) {
if (env.config.compileConfig().hwExtraSplit) {
ADD_PASS(hwExtraSplit);
ADD_DUMP_PASS("hwExtraSplit");
}
@ -242,7 +243,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
//
// this stage should be executed after "hwPoolTiling"
// and before "swPoolAdaptation"
if (env.config.enableReplaceWithReduceMean) {
if (env.config.compileConfig().enableReplaceWithReduceMean) {
ADD_PASS(replaceWithReduceMean);
ADD_DUMP_PASS("replaceWithReduceMean");
}
@ -261,7 +262,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
ADD_PASS(mergeReLUAndBias);
ADD_DUMP_PASS("mergeReLUAndBias");
if (env.config.enableEarlyEltwiseReLUFusion) {
if (env.config.compileConfig().enableEarlyEltwiseReLUFusion) {
ADD_PASS(mergeEltwiseAndReLUDynamic);
ADD_DUMP_PASS("mergeEltwiseAndReLUDynamic");
}
@ -279,7 +280,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
// TODO: mergePermute support for reorder stage too.
// TODO: pass that will swap Permute and per-element operations.
if (env.config.enablePermuteMerging) {
if (env.config.compileConfig().enablePermuteMerging) {
ADD_PASS(mergePermuteStages);
ADD_DUMP_PASS("mergePermuteStages");
}
@ -326,7 +327,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
// Model common optimizations
//
if (env.config.copyOptimization.getOrDefault(true)) {
if (env.config.get<CopyOptimizationOption>()) {
ADD_PASS(eliminateCopyStages);
ADD_DUMP_PASS("eliminateCopyStages");
}
@ -334,7 +335,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
//
// HW/SW injection
if (env.config.hwOptimization && env.config.injectSwOps.getOrDefault(true)) {
if (env.config.compileConfig().hwOptimization && env.config.compileConfig().injectSwOps.getOrDefault(true)) {
ADD_PASS(injectSw);
ADD_DUMP_PASS("injectSw");
}
@ -350,7 +351,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
// HW stages finalization
//
if (env.config.hwOptimization) {
if (env.config.compileConfig().hwOptimization) {
ADD_PASS(finalizeHwOps);
ADD_DUMP_PASS("hwFinalization");
}
@ -361,7 +362,7 @@ PassSet::Ptr PassManager::buildMiddleEnd() {
ADD_PASS(markFastStages);
ADD_DUMP_PASS("markFastStages");
if (env.config.enableMemoryTypesAnnotation) {
if (env.config.compileConfig().enableMemoryTypesAnnotation) {
ADD_PASS(annotateMemoryTypes);
ADD_DUMP_PASS("annotateMemoryTypes");
}


@ -48,7 +48,7 @@ void PassImpl::run(const Model& model) {
allocNonIntermediateData(model);
adjustModelForMemReqs(model);
copyHwMisalignedInput(model);
if (env.config.packDataInCmx.getOrDefault(true)) {
if (env.config.compileConfig().packDataInCmx.getOrDefault(true)) {
packDataInCmx(model);
}
}
@ -147,7 +147,7 @@ void PassImpl::collectMemReqs(const Model& model) {
}
void PassImpl::resetStageOrder(const Model& model) {
if (!CompileEnv::get().config.hwOptimization)
if (!CompileEnv::get().config.compileConfig().hwOptimization)
return;
static const std::string s_expectCMXOutput {"expectCMXOutput"};


@ -14,6 +14,7 @@
#include <vpu/middleend/allocator/allocator.hpp>
#include <vpu/compile_env.hpp>
#include <vpu/configuration/options/copy_optimization.hpp>
namespace vpu {
@ -78,7 +79,7 @@ void PassImpl::run(const Model& model) {
std::queue<Stage> copyToRemove;
if (!env.config.copyOptimization.hasValue()) {
if (!env.config.get<CopyOptimizationOption>()) {
int nCopyStages = 0;
for (const auto& stage : model->getStages()) {
if (stage->type() == StageType::Copy) {


@ -68,7 +68,7 @@ void PassImpl::run(const Model& model) {
// Collect HW and SW candidates
//
if (!env.config.injectSwOps.hasValue() &&
if (!env.config.compileConfig().injectSwOps.hasValue() &&
model->numStages() > nMaxStagesForInjectSw) {
env.log->warning(
"Pass [injectSw] SKIPPED : number of stages (%d) is larger than threshold %d",


@ -30,7 +30,7 @@ private:
};
void PassImpl::run(const Model& model) {
const bool enableEarlyEltwiseReLUFusion = CompileEnv::get().config.enableEarlyEltwiseReLUFusion;
const bool enableEarlyEltwiseReLUFusion = CompileEnv::get().config.compileConfig().enableEarlyEltwiseReLUFusion;
if (enableEarlyEltwiseReLUFusion) {
if (m_mode == MergeMode::DYNAMIC_NETWORK) {
VPU_PROFILE(mergeEltwiseAndReLUDynamic);


@ -170,7 +170,7 @@ void PassImpl::run(const Model& model) {
// Try to merge next Pooling layer
//
if (env.config.mergeHwPoolToConv) {
if (env.config.compileConfig().mergeHwPoolToConv) {
if (stage->type() == StageType::StubConv) {
if (auto nextPoolStage = getNextPoolStage(stage, output)) {
output = nextPoolStage->output(0);


@ -148,7 +148,7 @@ void PassImpl::run(const Model& model) {
auto output = stage->output(0);
const auto& env = CompileEnv::get();
if (env.config.hwDisabled(stage->origLayer()->name)) {
if (env.config.compileConfig().hwDisabled(stage->origLayer()->name)) {
continue;
}


@ -88,7 +88,7 @@ bool isScalable(const Stage& stage) {
bool checkGrowingOutput(const Model& model) {
const auto& env = CompileEnv::get();
if (!env.config.checkPreprocessingInsideModel) {
if (!env.config.compileConfig().checkPreprocessingInsideModel) {
return false;
}
@ -258,7 +258,7 @@ void PassImpl::run(const Model& model) {
scale = static_cast<float>(1ULL << static_cast<std::uint32_t>(shift));
}
if (!env.config.irWithVpuScalesDir.empty()) {
if (!env.config.compileConfig().irWithVpuScalesDir.empty()) {
stage->origLayer()->params["vpu_scale"] = toString(scale);
}
}


@ -199,7 +199,7 @@ StageSHAVEsRequirements StageNode::getSHAVEsRequirements() const {
// return max for Myriad2
const auto& compileEnv = CompileEnv::get();
if (compileEnv.platform == Platform::MYRIAD_2) {
if (compileEnv.platform == ncDevicePlatform_t::NC_MYRIAD_2) {
return StageSHAVEsRequirements::NeedMax;
}


@ -24,7 +24,7 @@ void FrontEnd::parseActivation(const Model& model, const ie::CNNLayerPtr& layer,
const auto type = layer->GetParamAsString("type");
const auto activationParserIt = activationParsers.find(type);
VPU_THROW_UNSUPPORTED_UNLESS(activationParserIt != activationParsers.end(),
VPU_THROW_UNSUPPORTED_LAYER_UNLESS(activationParserIt != activationParsers.end(),
"Failed to compile layer \"%v\"(type = %v) ", layer->name, type);
activationParserIt->second(model, layer, inputs, outputs);


@ -163,9 +163,9 @@ void parseConv2D(const Model & model,
kernelStrideY,
dilationX,
dilationY,
env.config.hwOptimization,
env.config.hwDilation,
env.config.hwDisabled(layer->name));
env.config.compileConfig().hwOptimization,
env.config.compileConfig().hwDilation,
env.config.compileConfig().hwDisabled(layer->name));
//
// Create const datas
@ -476,9 +476,9 @@ void parseConvND(const Model & model,
strides[1],
dilations[0],
dilations[1],
env.config.hwOptimization,
env.config.hwDilation,
env.config.hwDisabled(layer->name));
env.config.compileConfig().hwOptimization,
env.config.compileConfig().hwDilation,
env.config.compileConfig().hwDisabled(layer->name));
int try_hw = tryHW ? 1 : 0;


@ -37,13 +37,13 @@ void FrontEnd::parseFullyConnected(const Model& model, const ie::CNNLayerPtr& _l
// Check if HW is applicable
//
auto tryHW = env.config.hwOptimization;
auto tryHW = env.config.compileConfig().hwOptimization;
if (output->desc().dim(Dim::W, 1) != 1 || output->desc().dim(Dim::H, 1) != 1) {
tryHW = false;
}
if (env.config.hwDisabled(layer->name)) {
if (env.config.compileConfig().hwDisabled(layer->name)) {
tryHW = false;
}


@ -162,7 +162,7 @@ void FrontEnd::parseMTCNN(const Model& model, const ie::CNNLayerPtr& layer, cons
IE_ASSERT(inputs.size() == 1);
IE_ASSERT(outputs.size() == 1);
if (!env.config.hwOptimization) {
if (!env.config.compileConfig().hwOptimization) {
VPU_THROW_EXCEPTION << "MTCNN layer supports Myriad X with NCE only";
}


@ -124,7 +124,7 @@ Stage StageBuilder::addReorderStage(
const Data& output) {
const auto* env = CompileEnv::getOrNull();
VPU_THROW_UNLESS(
env == nullptr || !env->config.disableReorder,
env == nullptr || !env->config.compileConfig().disableReorder,
"Tried to add Reorder Stage %v, while DISABLE_REORDER option was set",
name);


@ -221,8 +221,8 @@ void parsePool2D(const Model & model,
//
const auto& env = CompileEnv::get();
bool hwOptimization = env.config.hwOptimization;
bool hwDisabled = env.config.hwDisabled(layer->name);
bool hwOptimization = env.config.compileConfig().hwOptimization;
bool hwDisabled = env.config.compileConfig().hwDisabled(layer->name);
int inputWidth = input->desc().dim(Dim::W);
int inputHeight = input->desc().dim(Dim::H);
@ -480,8 +480,8 @@ void parsePoolND(const Model & model,
//
const auto& env = CompileEnv::get();
bool hwOptimization = env.config.hwOptimization;
bool hwDisabled = env.config.hwDisabled(layer->name);
bool hwOptimization = env.config.compileConfig().hwOptimization;
bool hwDisabled = env.config.compileConfig().hwDisabled(layer->name);
bool tryHW = canTryHW(poolLayer->_type,
input_shape[0],


@ -0,0 +1,31 @@
// Copyright (C) 2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#include "configuration/myriad_configuration.hpp"
namespace vpu {
MyriadConfiguration::MyriadConfiguration() {}
void MyriadConfiguration::from(const std::map<std::string, std::string>& configuration) {
std::map<std::string, std::string> migratedOptions, notMigratedOptions;
for (const auto& entry : configuration) {
auto& destination = PluginConfiguration::supports(entry.first) ? migratedOptions : notMigratedOptions;
destination.emplace(entry);
}
PluginConfiguration::from(migratedOptions);
update(notMigratedOptions);
}
void MyriadConfiguration::fromAtRuntime(const std::map<std::string, std::string>& configuration) {
std::map<std::string, std::string> migratedOptions, notMigratedOptions;
for (const auto& entry : configuration) {
auto& destination = PluginConfiguration::supports(entry.first) ? migratedOptions : notMigratedOptions;
destination.emplace(entry);
}
PluginConfiguration::fromAtRuntime(migratedOptions);
update(notMigratedOptions, ConfigMode::RunTime);
}
} // namespace vpu


@ -0,0 +1,21 @@
// Copyright (C) 2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include "vpu/configuration/plugin_configuration.hpp"
#include "myriad_config.h"
namespace vpu {
class MyriadConfiguration final : public PluginConfiguration, public MyriadPlugin::MyriadConfig {
public:
MyriadConfiguration();
// TODO: remove once all options are migrated
void from(const std::map<std::string, std::string>& configuration);
void fromAtRuntime(const std::map<std::string, std::string>& configuration);
};
} // namespace vpu


@ -34,7 +34,7 @@ VPU_DECLARE_ENUM(MovidiusDdrType,
MICRON_1GB = 4,
)
class MyriadConfig final : public ParsedConfig {
class MyriadConfig : public virtual ParsedConfig {
public:
const std::string& pluginLogFilePath() const {
return _pluginLogFilePath;


@ -14,6 +14,7 @@
#include <vpu/utils/runtime_graph.hpp>
#include <legacy/net_pass.h>
#include <vpu/compile_env.hpp>
#include <vpu/configuration/options/log_level.hpp>
using namespace InferenceEngine;
@ -25,23 +26,24 @@ namespace MyriadPlugin {
ExecutableNetwork::ExecutableNetwork(
std::shared_ptr<IMvnc> mvnc,
std::vector<DevicePtr>& devicePool,
const MyriadConfig& config,
const MyriadConfiguration& config,
const ie::ICore* core) :
_config(config),
_core(core) {
VPU_PROFILE(ExecutableNetwork);
const auto& logLevel = _config.get<LogLevelOption>();
_log = std::make_shared<Logger>(
"MyriadPlugin",
_config.logLevel(),
logLevel,
defaultOutput(_config.pluginLogFilePath()));
_executor = std::make_shared<MyriadExecutor>(_config.forceReset(), std::move(mvnc), _config.logLevel(), _log);
_executor = std::make_shared<MyriadExecutor>(_config.forceReset(), std::move(mvnc), logLevel, _log);
_device = _executor->openDevice(devicePool, _config);
const auto& compileConfig = config.compileConfig();
const auto& revision = _device->revision();
_actualNumExecutors = compileConfig.numExecutors != -1 ? compileConfig.numExecutors : DefaultAllocation::numStreams(revision, compileConfig);
_actualNumExecutors = config.compileConfig().numExecutors != -1 ? config.compileConfig().numExecutors : DefaultAllocation::numStreams(revision, config);
_supportedMetrics = {
METRIC_KEY(NETWORK_NAME),
@ -56,22 +58,22 @@ ExecutableNetwork::ExecutableNetwork(
const ie::CNNNetwork& network,
std::shared_ptr<IMvnc> mvnc,
std::vector<DevicePtr>& devicePool,
const MyriadConfig& config,
const MyriadConfiguration& config,
const ie::ICore* core) :
ExecutableNetwork(std::move(mvnc), devicePool, config, core) {
VPU_PROFILE(ExecutableNetwork);
const auto compilerLog = std::make_shared<Logger>(
"GraphCompiler",
_config.logLevel(),
_config.get<LogLevelOption>(),
defaultOutput(_config.compilerLogFilePath()));
if (_device == nullptr)
IE_THROW() << "No device was detected";
auto compiledGraph = compileNetwork(
network,
static_cast<Platform>(_device->_platform),
_config.compileConfig(),
_device->_platform,
_config,
compilerLog,
_core);
@ -100,9 +102,7 @@ ExecutableNetwork::ExecutableNetwork(
}
}
void ExecutableNetwork::Import(std::istream& strm,
std::vector<DevicePtr> &devicePool,
const MyriadConfig& config) {
void ExecutableNetwork::Import(std::istream& strm, std::vector<DevicePtr> &devicePool, const MyriadConfiguration& configuration) {
auto currentPos = strm.tellg();
strm.seekg(0, strm.end);
auto blobSize = strm.tellg() - currentPos;
@ -147,11 +147,8 @@ void ExecutableNetwork::Import(std::istream& strm,
}
}
ExecutableNetwork::ExecutableNetwork(std::istream& strm,
std::shared_ptr<IMvnc> mvnc,
std::vector<DevicePtr> &devicePool,
const MyriadConfig& config,
const ie::ICore* core) :
ExecutableNetwork::ExecutableNetwork(std::istream& strm, std::shared_ptr<IMvnc> mvnc, std::vector<DevicePtr> &devicePool,
const MyriadConfiguration& config, const ie::ICore* core) :
ExecutableNetwork(std::move(mvnc), devicePool, config, core) {
VPU_PROFILE(ExecutableNetwork);
Import(strm, devicePool, config);
@ -161,7 +158,7 @@ ExecutableNetwork::ExecutableNetwork(
const std::string& blobFilename,
std::shared_ptr<IMvnc> mvnc,
std::vector<DevicePtr>& devicePool,
const MyriadConfig& config,
const MyriadConfiguration& config,
const ie::ICore* core) :
ExecutableNetwork(std::move(mvnc), devicePool, config, core) {
VPU_PROFILE(ExecutableNetwork);


@ -32,23 +32,14 @@ class ExecutableNetwork : public ie::ExecutableNetworkThreadSafeDefault {
public:
typedef std::shared_ptr<ExecutableNetwork> Ptr;
explicit ExecutableNetwork(const ie::CNNNetwork& network,
std::shared_ptr<IMvnc> mvnc,
std::vector<DevicePtr> &devicePool,
const MyriadConfig& config,
const ie::ICore* core);
ExecutableNetwork(const InferenceEngine::CNNNetwork& network, std::shared_ptr<IMvnc> mvnc, std::vector<DevicePtr> &devicePool,
const MyriadConfiguration& configuration, const ie::ICore* core);
explicit ExecutableNetwork(std::istream& strm,
std::shared_ptr<IMvnc> mvnc,
std::vector<DevicePtr> &devicePool,
const MyriadConfig& config,
const ie::ICore* core);
ExecutableNetwork(std::istream& strm, std::shared_ptr<IMvnc> mvnc, std::vector<DevicePtr> &devicePool, const MyriadConfiguration& configuration,
const ie::ICore* core);
explicit ExecutableNetwork(const std::string &blobFilename,
std::shared_ptr<IMvnc> mvnc,
std::vector<DevicePtr> &devicePool,
const MyriadConfig& config,
const ie::ICore* core);
ExecutableNetwork(const std::string &blobFilename, std::shared_ptr<IMvnc> mvnc, std::vector<DevicePtr> &devicePool,
const MyriadConfiguration& configuration, const ie::ICore* core);
virtual ~ExecutableNetwork() {
@ -97,9 +88,7 @@ public:
ie::CNNNetwork GetExecGraphInfo() override;
void Import(std::istream& strm,
std::vector<DevicePtr> &devicePool,
const MyriadConfig& config);
void Import(std::istream& strm, std::vector<DevicePtr> &devicePool, const MyriadConfiguration& configuration);
private:
Logger::Ptr _log;
@ -108,7 +97,7 @@ private:
GraphDesc _graphDesc;
DevicePtr _device;
GraphMetaInfo _graphMetaData;
MyriadConfig _config;
MyriadConfiguration _config;
const ie::ICore* _core = nullptr;
int _actualNumExecutors = 0;
std::vector<std::string> _supportedMetrics;
@ -119,10 +108,7 @@ private:
const size_t _maxTaskExecutorGetResultCount = 1;
std::queue<std::string> _taskExecutorGetResultIds;
ExecutableNetwork(std::shared_ptr<IMvnc> mvnc,
std::vector<DevicePtr> &devicePool,
const MyriadConfig& config,
const ie::ICore* core);
ExecutableNetwork(std::shared_ptr<IMvnc> mvnc, std::vector<DevicePtr> &devicePool, const MyriadConfiguration& config, const ie::ICore* core);
ie::ITaskExecutor::Ptr getNextTaskExecutor() {
std::string id = _taskExecutorGetResultIds.front();


@ -73,8 +73,7 @@ MyriadExecutor::MyriadExecutor(bool forceReset, std::shared_ptr<IMvnc> mvnc,
/*
* @brief Boot available device
*/
ncStatus_t MyriadExecutor::bootNextDevice(std::vector<DevicePtr> &devicePool,
const MyriadConfig& config) {
ncStatus_t MyriadExecutor::bootNextDevice(std::vector<DevicePtr> &devicePool, const MyriadConfiguration& config) {
VPU_PROFILE(bootNextDevice);
// #-17972, #-16790
#if defined(NO_BOOT)
@ -221,7 +220,7 @@ ncStatus_t MyriadExecutor::bootNextDevice(std::vector<DevicePtr> &devicePool,
}
DevicePtr MyriadExecutor::openDevice(std::vector<DevicePtr>& devicePool,
const MyriadConfig& config) {
const MyriadConfiguration& config) {
VPU_PROFILE(openDevice);
std::lock_guard<std::mutex> lock(device_mutex);


@ -13,6 +13,7 @@
#include <mvnc.h>
#include "myriad_mvnc_wrapper.h"
#include "configuration/myriad_configuration.hpp"
#include <ie_parameter.hpp>
@ -63,9 +64,9 @@ struct DeviceDesc {
((config.protocol() == NC_ANY_PROTOCOL) || (_protocol == config.protocol()));
}
Platform revision() const {
ncDevicePlatform_t revision() const {
VPU_THROW_UNLESS(_platform != NC_ANY_PLATFORM, "Cannot get a revision from not booted device");
return _platform == NC_MYRIAD_2 ? Platform::MYRIAD_2 : Platform::MYRIAD_X;
return _platform;
}
};
@ -86,7 +87,7 @@ public:
* @brief Get myriad device
* @return Already booted and empty device or new booted device
*/
DevicePtr openDevice(std::vector<DevicePtr> &devicePool, const MyriadConfig& config);
DevicePtr openDevice(std::vector<DevicePtr> &devicePool, const MyriadConfiguration& config);
static void closeDevices(std::vector<DevicePtr> &devicePool, std::shared_ptr<IMvnc> mvnc);
@ -134,8 +135,7 @@ private:
* @param configPlatform Boot the selected platform
* @param configProtocol Boot device with selected protocol
*/
ncStatus_t bootNextDevice(std::vector<DevicePtr> &devicePool,
const MyriadConfig& config);
ncStatus_t bootNextDevice(std::vector<DevicePtr> &devicePool, const MyriadConfiguration& config);
};
typedef std::shared_ptr<MyriadExecutor> MyriadExecutorPtr;


@ -14,6 +14,7 @@
#include <vpu/utils/logger.hpp>
#include <vpu/utils/ie_helpers.hpp>
#include <vpu/graph_transformer.hpp>
#include "myriad_executor.h"
#include "myriad_config.h"


@ -31,6 +31,7 @@ MyriadMetrics::MyriadMetrics() {
};
IE_SUPPRESS_DEPRECATED_START
// TODO: remove once all options are migrated
_supportedConfigKeys = {
MYRIAD_ENABLE_HW_ACCELERATION,
MYRIAD_ENABLE_RECEIVING_TENSOR_TIME,
@ -45,7 +46,6 @@ IE_SUPPRESS_DEPRECATED_START
KEY_VPU_MYRIAD_FORCE_RESET,
KEY_VPU_MYRIAD_PLATFORM,
CONFIG_KEY(LOG_LEVEL),
CONFIG_KEY(EXCLUSIVE_ASYNC_REQUESTS),
CONFIG_KEY(PERF_COUNT),
CONFIG_KEY(CONFIG_FILE),


@ -18,8 +18,9 @@
#include <vpu/utils/profiling.hpp>
#include <vpu/utils/error.hpp>
#include <vpu/ngraph/query_network.hpp>
#include <transformations/common_optimizations/common_optimizations.hpp>
#include <ngraph/pass/manager.hpp>
#include <vpu/configuration/options/log_level.hpp>
#include <vpu/configuration/options/copy_optimization.hpp>
#include "myriad_plugin.h"
@ -34,31 +35,40 @@ IExecutableNetworkInternal::Ptr Engine::LoadExeNetworkImpl(
const std::map<std::string, std::string>& config) {
VPU_PROFILE(LoadExeNetworkImpl);
auto parsedConfigCopy = _parsedConfig;
parsedConfigCopy.update(config);
auto executableNetworkConfiguration = _parsedConfig;
executableNetworkConfiguration.from(config);
executableNetworkConfiguration.validate();
return std::make_shared<ExecutableNetwork>(network, _mvnc, _devicePool, parsedConfigCopy, GetCore());
return std::make_shared<ExecutableNetwork>(network, _mvnc, _devicePool, executableNetworkConfiguration, GetCore());
}
void Engine::SetConfig(const std::map<std::string, std::string> &config) {
_parsedConfig.update(config);
_parsedConfig.from(config);
// TODO: remove once all options are migrated
for (const auto& entry : config) {
_config[entry.first] = entry.second;
}
#ifndef NDEBUG
if (const auto envVar = std::getenv("IE_VPU_LOG_LEVEL")) {
_parsedConfig.set(LogLevelOption::key(), envVar);
}
#endif
}
Parameter Engine::GetConfig(const std::string& name, const std::map<std::string, Parameter>& options) const {
auto supported_keys = _metrics->SupportedConfigKeys();
if (std::find(supported_keys.begin(),
supported_keys.end(), name) == supported_keys.end()) {
IE_THROW() << "Unsupported config key : " << name;
}
// TODO: remove once all options are migrated
const auto& supportedKeys = _metrics->SupportedConfigKeys();
VPU_THROW_UNSUPPORTED_OPTION_UNLESS(supportedKeys.count(name) == 1 || _parsedConfig.supports(name), "Unsupported configuration key: {}", name);
Parameter result;
auto option = _config.find(name);
if (option != _config.end())
result = option->second;
if (_parsedConfig.supports(name)) {
result = _parsedConfig.asParameter(name);
} else if (_config.count(name)) {
// TODO: remove once all options are migrated
result = _config.at(name);
}
return result;
}
@ -70,7 +80,7 @@ QueryNetworkResult Engine::QueryNetwork(
QueryNetworkResult res;
auto parsedConfigCopy = _parsedConfig;
parsedConfigCopy.update(config);
parsedConfigCopy.from(config);
const auto deviceName = parsedConfigCopy.deviceName();
if (!deviceName.empty()) {
@ -80,13 +90,13 @@ QueryNetworkResult Engine::QueryNetwork(
const auto log = std::make_shared<Logger>(
"GraphCompiler",
parsedConfigCopy.logLevel(),
_parsedConfig.get<LogLevelOption>(),
defaultOutput(parsedConfigCopy.compilerLogFilePath()));
const auto supportedLayers = getSupportedLayers(
network,
static_cast<Platform>(parsedConfigCopy.platform()),
parsedConfigCopy.compileConfig(),
parsedConfigCopy.platform(),
parsedConfigCopy,
log,
GetCore());
@ -111,6 +121,7 @@ Engine::Engine(std::shared_ptr<IMvnc> mvnc) :
_pluginName = "MYRIAD";
// TODO: remove once all options are migrated
IE_SUPPRESS_DEPRECATED_START
_config = {
{ MYRIAD_ENABLE_HW_ACCELERATION, CONFIG_VALUE(YES) },
@ -126,13 +137,19 @@ IE_SUPPRESS_DEPRECATED_START
{ KEY_VPU_MYRIAD_FORCE_RESET, CONFIG_VALUE(NO) },
{ KEY_VPU_MYRIAD_PLATFORM, "" },
{ KEY_LOG_LEVEL, CONFIG_VALUE(LOG_NONE) },
{ KEY_EXCLUSIVE_ASYNC_REQUESTS, CONFIG_VALUE(NO) },
{ KEY_PERF_COUNT, CONFIG_VALUE(NO) },
{ KEY_CONFIG_FILE, "" },
{ KEY_DEVICE_ID, "" },
};
IE_SUPPRESS_DEPRECATED_END
_parsedConfig.registerOption<LogLevelOption>();
_parsedConfig.registerOption<CopyOptimizationOption>();
IE_SUPPRESS_DEPRECATED_START
_parsedConfig.registerDeprecatedOption<LogLevelOption>(VPU_CONFIG_KEY(LOG_LEVEL));
IE_SUPPRESS_DEPRECATED_END
}
InferenceEngine::IExecutableNetworkInternal::Ptr Engine::ImportNetwork(
@ -140,14 +157,12 @@ InferenceEngine::IExecutableNetworkInternal::Ptr Engine::ImportNetwork(
const std::map<std::string, std::string>& config) {
VPU_PROFILE(ImportNetwork);
auto parsedConfigCopy = _parsedConfig;
parsedConfigCopy.update(config, ConfigMode::RunTime);
auto executableNetworkConfiguration = _parsedConfig;
executableNetworkConfiguration.fromAtRuntime(config);
executableNetworkConfiguration.validate();
const auto executableNetwork =
std::make_shared<ExecutableNetwork>(
model, _mvnc, _devicePool, parsedConfigCopy, GetCore());
const auto executableNetwork = std::make_shared<ExecutableNetwork>(model, _mvnc, _devicePool, executableNetworkConfiguration, GetCore());
executableNetwork->SetPointerToPlugin(shared_from_this());
return executableNetwork;
}
@ -186,7 +201,10 @@ InferenceEngine::Parameter Engine::GetMetric(const std::string& name,
const auto& supportedMetrics = _metrics->SupportedMetrics();
IE_SET_METRIC_RETURN(SUPPORTED_METRICS, std::vector<std::string>{supportedMetrics.cbegin(), supportedMetrics.cend()});
} else if (name == METRIC_KEY(SUPPORTED_CONFIG_KEYS)) {
const auto& supportedConfigKeys = _metrics->SupportedConfigKeys();
// TODO: remove once all options are migrated
auto supportedConfigKeys = _metrics->SupportedConfigKeys();
const auto& publicKeys = _parsedConfig.getPublicKeys();
supportedConfigKeys.insert(publicKeys.cbegin(), publicKeys.cend());
IE_SET_METRIC_RETURN(SUPPORTED_CONFIG_KEYS, std::vector<std::string>{supportedConfigKeys.cbegin(), supportedConfigKeys.cend()});
} else if (name == METRIC_KEY(OPTIMIZATION_CAPABILITIES)) {
const auto& optimizationCapabilities = _metrics->OptimizationCapabilities();


@ -8,6 +8,7 @@
#include "myriad_executable_network.h"
#include "myriad_mvnc_wrapper.h"
#include "myriad_metrics.h"
#include "configuration/myriad_configuration.hpp"
#include <memory>
#include <string>
#include <vector>
@ -50,7 +51,7 @@ public:
const std::map<std::string, ie::Parameter>& options) const override;
private:
MyriadConfig _parsedConfig;
MyriadConfiguration _parsedConfig;
std::vector<DevicePtr> _devicePool;
std::shared_ptr<IMvnc> _mvnc;
std::shared_ptr<MyriadMetrics> _metrics;


@ -246,6 +246,18 @@ TEST_F(ParameterTests, ParametersCStringEqual) {
ASSERT_FALSE(p1 != p2);
}
TEST_F(ParameterTests, MapOfParametersEqual) {
std::map<std::string, Parameter> map0;
map0["testParamInt"] = 4;
map0["testParamString"] = "test";
const auto map1 = map0;
Parameter p0 = map0;
Parameter p1 = map1;
ASSERT_TRUE(p0 == p1);
ASSERT_FALSE(p0 != p1);
}
TEST_F(ParameterTests, CompareParametersWithoutEqualOperator) {
class TestClass {
public:
@ -312,4 +324,95 @@ TEST_F(ParameterTests, ParameterRemovedRealObjectPointerWithDuplication) {
}
ASSERT_EQ(1, DestructorTest::constructorCount);
ASSERT_EQ(1, DestructorTest::destructorCount);
}
}
TEST_F(ParameterTests, PrintToEmptyParameterDoesNothing) {
Parameter p;
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::string{});
}
TEST_F(ParameterTests, PrintToIntParameter) {
int value = -5;
Parameter p = value;
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::to_string(value));
}
TEST_F(ParameterTests, PrintToUIntParameter) {
unsigned int value = 5;
Parameter p = value;
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::to_string(value));
}
TEST_F(ParameterTests, PrintToSize_tParameter) {
std::size_t value = 5;
Parameter p = value;
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::to_string(value));
}
TEST_F(ParameterTests, PrintToFloatParameter) {
Parameter p = 5.5f;
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::string{"5.5"});
}
TEST_F(ParameterTests, PrintToStringParameter) {
std::string value = "some text";
Parameter p = value;
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), value);
}
TEST_F(ParameterTests, PrintToVectorOfIntsParameterDoesNothing) {
Parameter p = std::vector<int>{-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5};
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::string{});
}
TEST_F(ParameterTests, PrintToVectorOfUIntsParameterDoesNothing) {
Parameter p = std::vector<unsigned int>{0, 1, 2, 3, 4, 5};
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::string{});
}
TEST_F(ParameterTests, PrintToVectorOfSize_tParameterDoesNothing) {
Parameter p = std::vector<std::size_t>{0, 1, 2, 3, 4, 5};
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::string{});
}
TEST_F(ParameterTests, PrintToVectorOfFloatsParameterDoesNothing) {
Parameter p = std::vector<float>{0.0f, 1.1f, 2.2f, 3.3f, 4.4f, 5.5f};
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::string{});
}
TEST_F(ParameterTests, PrintToVectorOfStringsParameterDoesNothing) {
Parameter p = std::vector<std::string>{"zero", "one", "two", "three", "four", "five"};
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::string{});
}
TEST_F(ParameterTests, PrintToMapOfParametersDoesNothing) {
std::map<std::string, Parameter> refMap;
refMap["testParamInt"] = 4;
refMap["testParamString"] = "test";
Parameter p = refMap;
std::stringstream stream;
ASSERT_NO_THROW(PrintTo(p, &stream));
ASSERT_EQ(stream.str(), std::string{});
}


@ -4,19 +4,32 @@
set(TARGET_NAME myriadFuncTests)
disable_deprecated_warnings()
include(${XLINK_DIR}/XLink.cmake)
addIeTargetTest(
NAME ${TARGET_NAME}
ROOT ${CMAKE_CURRENT_SOURCE_DIR}
INCLUDES
${CMAKE_CURRENT_SOURCE_DIR}
${IE_MAIN_SOURCE_DIR}/src/vpu/graph_transformer/include
${IE_MAIN_SOURCE_DIR}/tests_deprecated/behavior/vpu/myriad_tests/helpers
${XLINK_INCLUDE}
${XLINK_PLATFORM_INCLUDE}
DEPENDENCIES
myriadPlugin
LINK_LIBRARIES
vpu_common_lib
vpu_graph_transformer
funcSharedTests
mvnc
ADD_CPPLINT
DEFINES
__PC__
OBJECT_FILES
${IE_MAIN_SOURCE_DIR}/tests_deprecated/behavior/vpu/myriad_tests/helpers/myriad_devices.hpp
${IE_MAIN_SOURCE_DIR}/tests_deprecated/behavior/vpu/myriad_tests/helpers/myriad_devices.cpp
LABELS
VPU
MYRIAD


@ -5,209 +5,353 @@
#include "vpu/vpu_plugin_config.hpp"
#include "vpu/private_plugin_config.hpp"
#include "behavior/config.hpp"
#include "myriad_devices.hpp"
IE_SUPPRESS_DEPRECATED_START
using namespace BehaviorTestsDefinitions;
namespace {
const std::vector<InferenceEngine::Precision> netPrecisions = {
InferenceEngine::Precision::FP32,
InferenceEngine::Precision::FP16
using namespace BehaviorTestsDefinitions;
using namespace InferenceEngine::PluginConfigParams;
const std::vector<InferenceEngine::Precision>& getPrecisions() {
static const std::vector<InferenceEngine::Precision> precisions = {
InferenceEngine::Precision::FP32,
InferenceEngine::Precision::FP16,
};
return precisions;
}
std::vector<std::map<std::string, std::string>> getCorrectConfigs() {
std::vector<std::map<std::string, std::string>> correctConfigs = {
{{KEY_LOG_LEVEL, LOG_NONE}},
{{KEY_LOG_LEVEL, LOG_ERROR}},
{{KEY_LOG_LEVEL, LOG_WARNING}},
{{KEY_LOG_LEVEL, LOG_INFO}},
{{KEY_LOG_LEVEL, LOG_DEBUG}},
{{KEY_LOG_LEVEL, LOG_TRACE}},
{{InferenceEngine::MYRIAD_ENABLE_FORCE_RESET, CONFIG_VALUE(YES)}},
{{InferenceEngine::MYRIAD_ENABLE_FORCE_RESET, CONFIG_VALUE(NO)}},
{{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, CONFIG_VALUE(YES)}},
{{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, CONFIG_VALUE(NO)}},
{{InferenceEngine::MYRIAD_TILING_CMX_LIMIT_KB, "-1"}},
{{InferenceEngine::MYRIAD_TILING_CMX_LIMIT_KB, "0"}},
{{InferenceEngine::MYRIAD_TILING_CMX_LIMIT_KB, "10"}},
{{InferenceEngine::MYRIAD_ENABLE_RECEIVING_TENSOR_TIME, CONFIG_VALUE(YES)}},
{{InferenceEngine::MYRIAD_ENABLE_RECEIVING_TENSOR_TIME, CONFIG_VALUE(NO)}},
{{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "1"}},
{{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "2"}},
{{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "3"}},
{{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, CONFIG_VALUE(YES)}},
{{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, CONFIG_VALUE(NO)}},
// Deprecated
{{VPU_CONFIG_KEY(LOG_LEVEL), LOG_NONE}},
{{VPU_CONFIG_KEY(LOG_LEVEL), LOG_ERROR}},
{{VPU_CONFIG_KEY(LOG_LEVEL), LOG_WARNING}},
{{VPU_CONFIG_KEY(LOG_LEVEL), LOG_INFO}},
{{VPU_CONFIG_KEY(LOG_LEVEL), LOG_DEBUG}},
{{VPU_CONFIG_KEY(LOG_LEVEL), LOG_TRACE}},
{{VPU_MYRIAD_CONFIG_KEY(FORCE_RESET), CONFIG_VALUE(YES)}},
{{VPU_MYRIAD_CONFIG_KEY(FORCE_RESET), CONFIG_VALUE(NO)}},
{{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), CONFIG_VALUE(YES)}},
{{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), CONFIG_VALUE(NO)}},
{{VPU_CONFIG_KEY(PRINT_RECEIVE_TENSOR_TIME), CONFIG_VALUE(YES)}},
{{VPU_CONFIG_KEY(PRINT_RECEIVE_TENSOR_TIME), CONFIG_VALUE(NO)}},
{{VPU_MYRIAD_CONFIG_KEY(PLATFORM), VPU_MYRIAD_CONFIG_VALUE(2480)}},
{
{KEY_LOG_LEVEL, LOG_INFO},
{InferenceEngine::MYRIAD_COPY_OPTIMIZATION, InferenceEngine::PluginConfigParams::NO},
{InferenceEngine::MYRIAD_ENABLE_FORCE_RESET, CONFIG_VALUE(YES)},
{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, CONFIG_VALUE(YES)},
{InferenceEngine::MYRIAD_TILING_CMX_LIMIT_KB, "10"},
{InferenceEngine::MYRIAD_ENABLE_RECEIVING_TENSOR_TIME, CONFIG_VALUE(YES)},
{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "1"},
{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, CONFIG_VALUE(YES)},
},
};
const std::vector<std::map<std::string, std::string>> Configs = {
{{InferenceEngine::MYRIAD_ENABLE_FORCE_RESET, CONFIG_VALUE(YES)}},
{{InferenceEngine::MYRIAD_ENABLE_FORCE_RESET, CONFIG_VALUE(NO)}},
MyriadDevicesInfo info;
if (info.getAmountOfDevices(ncDeviceProtocol_t::NC_PCIE) > 0) {
correctConfigs.emplace_back(std::map<std::string, std::string>{{VPU_MYRIAD_CONFIG_KEY(PROTOCOL), VPU_MYRIAD_CONFIG_VALUE(PCIE)}});
correctConfigs.emplace_back(std::map<std::string, std::string>{{InferenceEngine::MYRIAD_PROTOCOL, InferenceEngine::MYRIAD_PCIE}});
}
{{CONFIG_KEY(LOG_LEVEL), CONFIG_VALUE(LOG_NONE)}},
{{CONFIG_KEY(LOG_LEVEL), CONFIG_VALUE(LOG_ERROR)}},
{{CONFIG_KEY(LOG_LEVEL), CONFIG_VALUE(LOG_WARNING)}},
{{CONFIG_KEY(LOG_LEVEL), CONFIG_VALUE(LOG_INFO)}},
{{CONFIG_KEY(LOG_LEVEL), CONFIG_VALUE(LOG_DEBUG)}},
{{CONFIG_KEY(LOG_LEVEL), CONFIG_VALUE(LOG_TRACE)}},
if (info.getAmountOfDevices(ncDeviceProtocol_t::NC_USB) > 0) {
correctConfigs.emplace_back(std::map<std::string, std::string>{{VPU_MYRIAD_CONFIG_KEY(PROTOCOL), VPU_MYRIAD_CONFIG_VALUE(USB)}});
correctConfigs.emplace_back(std::map<std::string, std::string>{{InferenceEngine::MYRIAD_PROTOCOL, InferenceEngine::MYRIAD_USB}});
}
{{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, CONFIG_VALUE(YES)}},
{{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, CONFIG_VALUE(NO)}},
return correctConfigs;
}
{{InferenceEngine::MYRIAD_TILING_CMX_LIMIT_KB, "-1"}},
{{InferenceEngine::MYRIAD_TILING_CMX_LIMIT_KB, "0"}},
{{InferenceEngine::MYRIAD_TILING_CMX_LIMIT_KB, "10"}},
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, CorrectConfigTests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::ValuesIn(getCorrectConfigs())),
CorrectConfigTests::getTestCaseName);
{{InferenceEngine::MYRIAD_ENABLE_RECEIVING_TENSOR_TIME, CONFIG_VALUE(YES)}},
{{InferenceEngine::MYRIAD_ENABLE_RECEIVING_TENSOR_TIME, CONFIG_VALUE(NO)}},
{{InferenceEngine::MYRIAD_PROTOCOL, InferenceEngine::MYRIAD_USB}},
{{InferenceEngine::MYRIAD_PROTOCOL, InferenceEngine::MYRIAD_PCIE}},
{{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "1"}},
{{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "2"}},
{{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "3"}},
{{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, CONFIG_VALUE(YES)}},
{{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, CONFIG_VALUE(NO)}},
// Deprecated
{{VPU_MYRIAD_CONFIG_KEY(FORCE_RESET), CONFIG_VALUE(YES)}},
{{VPU_MYRIAD_CONFIG_KEY(FORCE_RESET), CONFIG_VALUE(NO)}},
{{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), CONFIG_VALUE(YES)}},
{{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), CONFIG_VALUE(NO)}},
{{VPU_CONFIG_KEY(PRINT_RECEIVE_TENSOR_TIME), CONFIG_VALUE(YES)}},
{{VPU_CONFIG_KEY(PRINT_RECEIVE_TENSOR_TIME), CONFIG_VALUE(NO)}},
{{VPU_MYRIAD_CONFIG_KEY(PROTOCOL), VPU_MYRIAD_CONFIG_VALUE(USB)}},
{{VPU_MYRIAD_CONFIG_KEY(PROTOCOL), VPU_MYRIAD_CONFIG_VALUE(PCIE)}},
{{VPU_MYRIAD_CONFIG_KEY(PLATFORM), VPU_MYRIAD_CONFIG_VALUE(2450)}},
{{VPU_MYRIAD_CONFIG_KEY(PLATFORM), VPU_MYRIAD_CONFIG_VALUE(2480)}}
};

const std::vector<std::map<std::string, std::string>>& getCorrectMultiConfigs() {
static const std::vector<std::map<std::string, std::string>> correctMultiConfigs = {
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
},
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{KEY_LOG_LEVEL, LOG_DEBUG},
},
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{InferenceEngine::MYRIAD_COPY_OPTIMIZATION, InferenceEngine::PluginConfigParams::NO},
},
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, YES},
},
// Deprecated
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_CONFIG_KEY(LOG_LEVEL), LOG_DEBUG},
},
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), CONFIG_VALUE(YES)},
},
};
return correctMultiConfigs;
}
const std::vector<std::map<std::string, std::string>> MultiConfigs = {
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{CONFIG_KEY(LOG_LEVEL), CONFIG_VALUE(LOG_DEBUG)}},
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, CONFIG_VALUE(YES)}},
// Deprecated
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), CONFIG_VALUE(YES)}}
};

const std::vector<std::pair<std::string, InferenceEngine::Parameter>>& getDefaultEntries() {
static const std::vector<std::pair<std::string, InferenceEngine::Parameter>> defaultEntries = {
{KEY_LOG_LEVEL, {LOG_NONE}},
};
return defaultEntries;
}

INSTANTIATE_TEST_CASE_P(smoke_Multi_BehaviorTests, CorrectConfigTests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MULTI),
::testing::ValuesIn(getCorrectMultiConfigs())),
CorrectConfigTests::getTestCaseName);
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, CorrectConfigTests,
::testing::Combine(
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::ValuesIn(Configs)),
CorrectConfigTests::getTestCaseName);
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, CorrectSingleOptionDefaultValueConfigTests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::ValuesIn(getDefaultEntries())));
INSTANTIATE_TEST_CASE_P(smoke_Multi_BehaviorTests, CorrectConfigTests,
::testing::Combine(
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_MULTI),
::testing::ValuesIn(MultiConfigs)),
CorrectConfigTests::getTestCaseName);
const std::vector<std::tuple<std::string, std::string, InferenceEngine::Parameter>>& getCustomEntries() {
static const std::vector<std::tuple<std::string, std::string, InferenceEngine::Parameter>> customEntries = {
std::make_tuple(KEY_LOG_LEVEL, LOG_NONE, InferenceEngine::Parameter{LOG_NONE}),
std::make_tuple(KEY_LOG_LEVEL, LOG_ERROR, InferenceEngine::Parameter{LOG_ERROR}),
std::make_tuple(KEY_LOG_LEVEL, LOG_WARNING, InferenceEngine::Parameter{LOG_WARNING}),
std::make_tuple(KEY_LOG_LEVEL, LOG_INFO, InferenceEngine::Parameter{LOG_INFO}),
std::make_tuple(KEY_LOG_LEVEL, LOG_DEBUG, InferenceEngine::Parameter{LOG_DEBUG}),
std::make_tuple(KEY_LOG_LEVEL, LOG_TRACE, InferenceEngine::Parameter{LOG_TRACE}),
std::make_tuple(VPU_CONFIG_KEY(LOG_LEVEL), LOG_NONE, InferenceEngine::Parameter{LOG_NONE}),
std::make_tuple(VPU_CONFIG_KEY(LOG_LEVEL), LOG_ERROR, InferenceEngine::Parameter{LOG_ERROR}),
std::make_tuple(VPU_CONFIG_KEY(LOG_LEVEL), LOG_WARNING, InferenceEngine::Parameter{LOG_WARNING}),
std::make_tuple(VPU_CONFIG_KEY(LOG_LEVEL), LOG_INFO, InferenceEngine::Parameter{LOG_INFO}),
std::make_tuple(VPU_CONFIG_KEY(LOG_LEVEL), LOG_DEBUG, InferenceEngine::Parameter{LOG_DEBUG}),
std::make_tuple(VPU_CONFIG_KEY(LOG_LEVEL), LOG_TRACE, InferenceEngine::Parameter{LOG_TRACE}),
std::make_tuple(InferenceEngine::MYRIAD_COPY_OPTIMIZATION, InferenceEngine::PluginConfigParams::YES, InferenceEngine::Parameter{true}),
std::make_tuple(InferenceEngine::MYRIAD_COPY_OPTIMIZATION, InferenceEngine::PluginConfigParams::NO, InferenceEngine::Parameter{false}),
};
return customEntries;
}

const std::vector<std::map<std::string, std::string>> inconfigs = {
{{InferenceEngine::MYRIAD_PROTOCOL, "BLUETOOTH"}},
{{InferenceEngine::MYRIAD_PROTOCOL, "LAN"}},
{{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, "ON"}},
{{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, "OFF"}},
{{InferenceEngine::MYRIAD_ENABLE_FORCE_RESET, "ON"}},
{{InferenceEngine::MYRIAD_ENABLE_FORCE_RESET, "OFF"}},
{{CONFIG_KEY(LOG_LEVEL), "VERBOSE"}},
{{InferenceEngine::MYRIAD_TILING_CMX_LIMIT_KB, "-10"}},
{{InferenceEngine::MYRIAD_ENABLE_RECEIVING_TENSOR_TIME, "ON"}},
{{InferenceEngine::MYRIAD_ENABLE_RECEIVING_TENSOR_TIME, "OFF"}},
{{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "Two"}},
{{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "SINGLE"}},
{{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, "ON"}},
{{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, "OFF"}},
// Deprecated
{{VPU_MYRIAD_CONFIG_KEY(PROTOCOL), "BLUETOOTH"}},
{{VPU_MYRIAD_CONFIG_KEY(PROTOCOL), "LAN"}},
{{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), "ON"}},
{{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), "OFF"}},
{{VPU_MYRIAD_CONFIG_KEY(FORCE_RESET), "ON"}},
{{VPU_MYRIAD_CONFIG_KEY(FORCE_RESET), "OFF"}},
{{VPU_CONFIG_KEY(PRINT_RECEIVE_TENSOR_TIME), "ON"}},
{{VPU_CONFIG_KEY(PRINT_RECEIVE_TENSOR_TIME), "OFF"}},
{{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "-1"}},
{{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "0"}},
{{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "1"}},
};
const std::vector<std::map<std::string, std::string>> multiinconfigs = {
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, "ON"}},
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{CONFIG_KEY(LOG_LEVEL), "VERBOSE"}},
// Deprecated
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), "ON"}},
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "-1"}},
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "0"}},
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "1"}},
};

INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, CorrectSingleOptionCustomValueConfigTests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::ValuesIn(getCustomEntries())));
const std::vector<std::string>& getPublicOptions() {
static const std::vector<std::string> publicOptions = {
KEY_LOG_LEVEL,
VPU_CONFIG_KEY(LOG_LEVEL),
};
return publicOptions;
}
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, IncorrectConfigTests,
::testing::Combine(
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::ValuesIn(inconfigs)),
IncorrectConfigTests::getTestCaseName);
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, CorrectConfigPublicOptionsTests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::ValuesIn(getPublicOptions())));
INSTANTIATE_TEST_CASE_P(smoke_Multi_BehaviorTests, IncorrectConfigTests,
::testing::Combine(
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_MULTI),
::testing::ValuesIn(multiinconfigs)),
IncorrectConfigTests::getTestCaseName);
const std::vector<std::map<std::string, std::string>> Inconf = {
{{"some_nonexistent_key", "some_unknown_value"}}
};

const std::vector<std::map<std::string, std::string>> multiInconf = {
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{"some_nonexistent_key", "some_unknown_value"}}
};

const std::vector<std::string>& getPrivateOptions() {
static const std::vector<std::string> privateOptions = {
InferenceEngine::MYRIAD_COPY_OPTIMIZATION,
};
return privateOptions;
}

INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, CorrectConfigPrivateOptionsTests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::ValuesIn(getPrivateOptions())));
const std::vector<std::map<std::string, std::string>>& getIncorrectConfigs() {
static const std::vector<std::map<std::string, std::string>> incorrectConfigs = {
{{KEY_LOG_LEVEL, "INCORRECT_LOG_LEVEL"}},
{{InferenceEngine::MYRIAD_COPY_OPTIMIZATION, "ON"}},
{{InferenceEngine::MYRIAD_COPY_OPTIMIZATION, "OFF"}},
{{InferenceEngine::MYRIAD_PROTOCOL, "BLUETOOTH"}},
{{InferenceEngine::MYRIAD_PROTOCOL, "LAN"}},
{{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, "ON"}},
{{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, "OFF"}},
{{InferenceEngine::MYRIAD_ENABLE_FORCE_RESET, "ON"}},
{{InferenceEngine::MYRIAD_ENABLE_FORCE_RESET, "OFF"}},
{{InferenceEngine::MYRIAD_TILING_CMX_LIMIT_KB, "-10"}},
{{InferenceEngine::MYRIAD_ENABLE_RECEIVING_TENSOR_TIME, "ON"}},
{{InferenceEngine::MYRIAD_ENABLE_RECEIVING_TENSOR_TIME, "OFF"}},
{{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "Two"}},
{{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "SINGLE"}},
{{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, "ON"}},
{{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, "OFF"}},
// Deprecated
{{VPU_CONFIG_KEY(LOG_LEVEL), "INCORRECT_LOG_LEVEL"}},
{{VPU_MYRIAD_CONFIG_KEY(PROTOCOL), "BLUETOOTH"}},
{{VPU_MYRIAD_CONFIG_KEY(PROTOCOL), "LAN"}},
{{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), "ON"}},
{{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), "OFF"}},
{{VPU_MYRIAD_CONFIG_KEY(FORCE_RESET), "ON"}},
{{VPU_MYRIAD_CONFIG_KEY(FORCE_RESET), "OFF"}},
{{VPU_CONFIG_KEY(PRINT_RECEIVE_TENSOR_TIME), "ON"}},
{{VPU_CONFIG_KEY(PRINT_RECEIVE_TENSOR_TIME), "OFF"}},
{{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "-1"}},
{{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "0"}},
{{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "1"}},
{
{KEY_LOG_LEVEL, LOG_INFO},
{InferenceEngine::MYRIAD_COPY_OPTIMIZATION, "ON"},
{InferenceEngine::MYRIAD_PROTOCOL, "BLUETOOTH"},
{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, CONFIG_VALUE(YES)},
{InferenceEngine::MYRIAD_ENABLE_FORCE_RESET, "ON"},
{InferenceEngine::MYRIAD_TILING_CMX_LIMIT_KB, "10"},
{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, "OFF"},
{InferenceEngine::MYRIAD_THROUGHPUT_STREAMS, "1"},
{InferenceEngine::MYRIAD_ENABLE_WEIGHTS_ANALYSIS, "ON"},
},
};
return incorrectConfigs;
}
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, IncorrectConfigTests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::ValuesIn(getIncorrectConfigs())),
IncorrectConfigTests::getTestCaseName);
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, IncorrectConfigAPITests,
::testing::Combine(
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::ValuesIn(Inconf)),
IncorrectConfigAPITests::getTestCaseName);
const std::vector<std::map<std::string, std::string>>& getIncorrectMultiConfigs() {
static const std::vector<std::map<std::string, std::string>> incorrectMultiConfigs = {
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{KEY_LOG_LEVEL, "INCORRECT_LOG_LEVEL"},
},
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{InferenceEngine::MYRIAD_ENABLE_HW_ACCELERATION, "ON"},
},
// Deprecated
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_CONFIG_KEY(LOG_LEVEL), "INCORRECT_LOG_LEVEL"},
},
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_CONFIG_KEY(HW_STAGES_OPTIMIZATION), "ON"},
},
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "-1"},
},
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "0"},
},
{
{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD},
{VPU_MYRIAD_CONFIG_KEY(PLATFORM), "1"},
},
};
return incorrectMultiConfigs;
}

const std::vector<std::map<std::string, std::string>> conf = {
{}
};

INSTANTIATE_TEST_CASE_P(smoke_Multi_BehaviorTests, IncorrectConfigAPITests,
::testing::Combine(
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_MULTI),
::testing::ValuesIn(multiInconf)),
IncorrectConfigAPITests::getTestCaseName);
const std::vector<std::map<std::string, std::string>> multiconf = {
{{InferenceEngine::MultiDeviceConfigParams::KEY_MULTI_DEVICE_PRIORITIES, CommonTestUtils::DEVICE_MYRIAD}}
};
INSTANTIATE_TEST_CASE_P(smoke_Multi_BehaviorTests, IncorrectConfigTests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MULTI),
::testing::ValuesIn(getIncorrectMultiConfigs())),
IncorrectConfigTests::getTestCaseName);
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, CorrectConfigAPITests,
::testing::Combine(
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::ValuesIn(conf)),
CorrectConfigAPITests::getTestCaseName);
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, IncorrectConfigSingleOptionTests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::Values("INCORRECT_KEY")));
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, CorrectConfigAPITests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::Values(std::map<std::string, std::string>{})),
CorrectConfigAPITests::getTestCaseName);
INSTANTIATE_TEST_CASE_P(smoke_Multi_BehaviorTests, CorrectConfigAPITests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MULTI),
::testing::ValuesIn(getCorrectMultiConfigs())),
CorrectConfigAPITests::getTestCaseName);
INSTANTIATE_TEST_CASE_P(smoke_BehaviorTests, IncorrectConfigAPITests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MYRIAD),
::testing::Values(std::map<std::string, std::string>{{"INCORRECT_KEY", "INCORRECT_VALUE"}})),
IncorrectConfigAPITests::getTestCaseName);
INSTANTIATE_TEST_CASE_P(smoke_Multi_BehaviorTests, IncorrectConfigAPITests,
::testing::Combine(
::testing::ValuesIn(getPrecisions()),
::testing::Values(CommonTestUtils::DEVICE_MULTI),
::testing::ValuesIn(getIncorrectMultiConfigs())),
IncorrectConfigAPITests::getTestCaseName);
INSTANTIATE_TEST_CASE_P(smoke_Multi_BehaviorTests, CorrectConfigAPITests,
::testing::Combine(
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_MULTI),
::testing::ValuesIn(multiconf)),
CorrectConfigAPITests::getTestCaseName);
} // namespace

View File

@ -27,11 +27,45 @@
#include "ngraph_functions/pass/convert_prc.hpp"
namespace BehaviorTestsUtils {
typedef std::tuple<
InferenceEngine::Precision, // Network precision
std::string, // Device name
std::map<std::string, std::string> // Config
> BehaviorParams;
using BehaviorParamsEmptyConfig = std::tuple<
InferenceEngine::Precision, // Network precision
std::string // Device name
>;
class BehaviorTestsEmptyConfig : public testing::WithParamInterface<BehaviorParamsEmptyConfig>,
public CommonTestUtils::TestsCommon {
public:
static std::string getTestCaseName(testing::TestParamInfo<BehaviorParamsEmptyConfig> obj) {
InferenceEngine::Precision netPrecision;
std::string targetDevice;
std::tie(netPrecision, targetDevice) = obj.param;
std::ostringstream result;
result << "netPRC=" << netPrecision.name() << "_";
result << "targetDevice=" << targetDevice;
return result.str();
}
void SetUp() override {
std::tie(netPrecision, targetDevice) = this->GetParam();
function = ngraph::builder::subgraph::makeConvPoolRelu();
}
void TearDown() override {
function.reset();
}
std::shared_ptr<InferenceEngine::Core> ie = PluginCache::get().ie();
std::shared_ptr<ngraph::Function> function;
InferenceEngine::Precision netPrecision;
std::string targetDevice;
};
typedef std::tuple<
InferenceEngine::Precision, // Network precision
std::string, // Device name
std::map<std::string, std::string> // Config
> BehaviorParams;
class BehaviorTestsBasic : public testing::WithParamInterface<BehaviorParams>,
public CommonTestUtils::TestsCommon {
@ -71,4 +105,86 @@ public:
std::map<std::string, std::string> configuration;
};
using BehaviorParamsSingleOption = std::tuple<
InferenceEngine::Precision, // Network precision
std::string, // Device name
std::string // Key
>;
class BehaviorTestsSingleOption : public testing::WithParamInterface<BehaviorParamsSingleOption>,
public CommonTestUtils::TestsCommon {
public:
void SetUp() override {
std::tie(netPrecision, targetDevice, key) = this->GetParam();
function = ngraph::builder::subgraph::makeConvPoolRelu();
}
void TearDown() override {
function.reset();
}
std::shared_ptr<InferenceEngine::Core> ie = PluginCache::get().ie();
std::shared_ptr<ngraph::Function> function;
InferenceEngine::Precision netPrecision;
std::string targetDevice;
std::string key;
};
using BehaviorParamsSingleOptionDefault = std::tuple<
InferenceEngine::Precision, // Network precision
std::string, // Device name
std::pair<std::string, InferenceEngine::Parameter> // Configuration key and its default value
>;
class BehaviorTestsSingleOptionDefault : public testing::WithParamInterface<BehaviorParamsSingleOptionDefault>,
public CommonTestUtils::TestsCommon {
public:
void SetUp() override {
std::pair<std::string, InferenceEngine::Parameter> entry;
std::tie(netPrecision, targetDevice, entry) = this->GetParam();
std::tie(key, value) = entry;
function = ngraph::builder::subgraph::makeConvPoolRelu();
}
void TearDown() override {
function.reset();
}
std::shared_ptr<InferenceEngine::Core> ie = PluginCache::get().ie();
std::shared_ptr<ngraph::Function> function;
InferenceEngine::Precision netPrecision;
std::string targetDevice;
std::string key;
InferenceEngine::Parameter value;
};
using BehaviorParamsSingleOptionCustom = std::tuple<
InferenceEngine::Precision, // Network precision
std::string, // Device name
std::tuple<std::string, std::string, InferenceEngine::Parameter> // Configuration key, value and reference
>;
class BehaviorTestsSingleOptionCustom : public testing::WithParamInterface<BehaviorParamsSingleOptionCustom>,
public CommonTestUtils::TestsCommon {
public:
void SetUp() override {
std::tuple<std::string, std::string, InferenceEngine::Parameter> entry;
std::tie(netPrecision, targetDevice, entry) = this->GetParam();
std::tie(key, value, reference) = entry;
function = ngraph::builder::subgraph::makeConvPoolRelu();
}
void TearDown() override {
function.reset();
}
std::shared_ptr<InferenceEngine::Core> ie = PluginCache::get().ie();
std::shared_ptr<ngraph::Function> function;
InferenceEngine::Precision netPrecision;
std::string targetDevice;
std::string key;
std::string value;
InferenceEngine::Parameter reference;
};
} // namespace BehaviorTestsUtils

View File

@ -27,9 +27,11 @@
#include "ngraph_functions/subgraph_builders.hpp"
namespace BehaviorTestsDefinitions {
using CorrectConfigTests = BehaviorTestsUtils::BehaviorTestsBasic;
using EmptyConfigTests = BehaviorTestsUtils::BehaviorTestsEmptyConfig;
// Setting empty config doesn't throw
TEST_P(CorrectConfigTests, SetEmptyConfig) {
TEST_P(EmptyConfigTests, SetEmptyConfig) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
@ -39,6 +41,29 @@ namespace BehaviorTestsDefinitions {
ASSERT_NO_THROW(ie->SetConfig(config, targetDevice));
}
TEST_P(EmptyConfigTests, CanLoadNetworkWithEmptyConfig) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
InferenceEngine::CNNNetwork cnnNet(function);
std::map<std::string, std::string> config;
ASSERT_NO_THROW(ie->GetMetric(targetDevice, METRIC_KEY(SUPPORTED_CONFIG_KEYS)));
ASSERT_NO_THROW(ie->LoadNetwork(cnnNet, targetDevice, config));
}
using CorrectSingleOptionDefaultValueConfigTests = BehaviorTestsUtils::BehaviorTestsSingleOptionDefault;
TEST_P(CorrectSingleOptionDefaultValueConfigTests, CheckDefaultValueOfConfig) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
InferenceEngine::CNNNetwork cnnNet(function);
ASSERT_NO_THROW(ie->GetMetric(targetDevice, METRIC_KEY(SUPPORTED_CONFIG_KEYS)));
ASSERT_EQ(ie->GetConfig(targetDevice, key), value);
}
using CorrectConfigTests = BehaviorTestsUtils::BehaviorTestsBasic;
// Setting correct config doesn't throw
TEST_P(CorrectConfigTests, SetCorrectConfig) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
@ -49,6 +74,53 @@ namespace BehaviorTestsDefinitions {
ASSERT_NO_THROW(ie->SetConfig(configuration, targetDevice));
}
TEST_P(CorrectConfigTests, CanLoadNetworkWithCorrectConfig) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
InferenceEngine::CNNNetwork cnnNet(function);
ASSERT_NO_THROW(ie->LoadNetwork(cnnNet, targetDevice, configuration));
}
using CorrectSingleOptionCustomValueConfigTests = BehaviorTestsUtils::BehaviorTestsSingleOptionCustom;
TEST_P(CorrectSingleOptionCustomValueConfigTests, CheckCustomValueOfConfig) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
InferenceEngine::CNNNetwork cnnNet(function);
ASSERT_NO_THROW(ie->GetMetric(targetDevice, METRIC_KEY(SUPPORTED_CONFIG_KEYS)));
std::map<std::string, std::string> configuration = {{key, value}};
ASSERT_NO_THROW(ie->SetConfig(configuration, targetDevice));
ASSERT_EQ(ie->GetConfig(targetDevice, key), reference);
}
using CorrectConfigPublicOptionsTests = BehaviorTestsUtils::BehaviorTestsSingleOption;
TEST_P(CorrectConfigPublicOptionsTests, CanSeePublicOption) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
InferenceEngine::CNNNetwork cnnNet(function);
InferenceEngine::Parameter metric;
ASSERT_NO_THROW(metric = ie->GetMetric(targetDevice, METRIC_KEY(SUPPORTED_CONFIG_KEYS)));
const auto& supportedOptions = metric.as<std::vector<std::string>>();
ASSERT_NE(std::find(supportedOptions.cbegin(), supportedOptions.cend(), key), supportedOptions.cend());
}
using CorrectConfigPrivateOptionsTests = BehaviorTestsUtils::BehaviorTestsSingleOption;
TEST_P(CorrectConfigPrivateOptionsTests, CanNotSeePrivateOption) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
InferenceEngine::CNNNetwork cnnNet(function);
InferenceEngine::Parameter metric;
ASSERT_NO_THROW(metric = ie->GetMetric(targetDevice, METRIC_KEY(SUPPORTED_CONFIG_KEYS)));
const auto& supportedOptions = metric.as<std::vector<std::string>>();
ASSERT_EQ(std::find(supportedOptions.cbegin(), supportedOptions.cend(), key), supportedOptions.cend());
}
using IncorrectConfigTests = BehaviorTestsUtils::BehaviorTestsBasic;
TEST_P(IncorrectConfigTests, SetConfigWithIncorrectKey) {
@ -67,7 +139,7 @@ namespace BehaviorTestsDefinitions {
}
}
TEST_P(IncorrectConfigTests, canNotLoadNetworkWithIncorrectConfig) {
TEST_P(IncorrectConfigTests, CanNotLoadNetworkWithIncorrectConfig) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
@ -80,6 +152,17 @@ namespace BehaviorTestsDefinitions {
}
}
using IncorrectConfigSingleOptionTests = BehaviorTestsUtils::BehaviorTestsSingleOption;
TEST_P(IncorrectConfigSingleOptionTests, CanNotGetConfigWithIncorrectConfig) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
InferenceEngine::CNNNetwork cnnNet(function);
ASSERT_NO_THROW(ie->GetMetric(targetDevice, METRIC_KEY(SUPPORTED_CONFIG_KEYS)));
ASSERT_THROW(ie->GetConfig(targetDevice, key), InferenceEngine::Exception);
}
using IncorrectConfigAPITests = BehaviorTestsUtils::BehaviorTestsBasic;
TEST_P(IncorrectConfigAPITests, SetConfigWithNoExistingKey) {
@ -99,7 +182,7 @@ namespace BehaviorTestsDefinitions {
using CorrectConfigAPITests = BehaviorTestsUtils::BehaviorTestsBasic;
TEST_P(CorrectConfigAPITests, canSetExclusiveAsyncRequests) {
TEST_P(CorrectConfigAPITests, CanSetExclusiveAsyncRequests) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
@ -130,7 +213,7 @@ namespace BehaviorTestsDefinitions {
}
}
TEST_P(CorrectConfigAPITests, withoutExclusiveAsyncRequests) {
TEST_P(CorrectConfigAPITests, WithoutExclusiveAsyncRequests) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
// Create CNNNetwork from ngraph::Function
@ -159,7 +242,7 @@ namespace BehaviorTestsDefinitions {
}
}
TEST_P(CorrectConfigAPITests, reusableCPUStreamsExecutor) {
TEST_P(CorrectConfigAPITests, ReusableCPUStreamsExecutor) {
// Skip test according to plugin specific disabledTestPatterns() (if any)
SKIP_IF_CURRENT_TEST_IS_DISABLED()
ASSERT_EQ(0u, InferenceEngine::ExecutorManager::getInstance()->getExecutorsNumber());
@ -200,4 +283,4 @@ namespace BehaviorTestsDefinitions {
ASSERT_EQ(0u, InferenceEngine::ExecutorManager::getInstance()->getIdleCPUStreamsExecutorsNumber());
}
}
} // namespace BehaviorTestsDefinitions

View File

@ -9,6 +9,8 @@
#include <ie_blob.h>
#include <blob_factory.hpp>
using namespace InferenceEngine::details;
namespace CommonTestUtils {
bool isDenseBlob(const InferenceEngine::Blob::Ptr& blob) {
@ -69,8 +71,8 @@ void fill_data_with_broadcast(InferenceEngine::Blob::Ptr& blob, InferenceEngine:
if (src_dims[i] != dst_dims[i] && src_dims[i] != 1)
compatible = false;
}
IE_ASSERT(compatible) << "fill_data_with_broadcast error: Tensor shape " << values_dims
<< " can not be broadcasted to shape " << blob_dims;
IE_ASSERT(compatible);
auto fill_strides_like_plain = [] (SizeVector dims) {
SizeVector str(dims.size());

View File

@ -6,6 +6,9 @@
#include <vpu/utils/io.hpp>
#include <vpu/configuration/options/log_level.hpp>
#include <vpu/configuration/options/copy_optimization.hpp>
#include <atomic>
#include <iomanip>
@ -287,6 +290,8 @@ void GraphTransformerTest::SetUp() {
frontEnd = std::make_shared<FrontEnd>(stageBuilder, &_mockCore);
backEnd = std::make_shared<BackEnd>();
passManager = std::make_shared<PassManager>(stageBuilder, backEnd);
config = createConfiguration();
}
void GraphTransformerTest::TearDown() {
@ -301,13 +306,13 @@ void GraphTransformerTest::TearDown() {
void GraphTransformerTest::InitCompileEnv() {
if (const auto envVar = std::getenv("IE_VPU_DUMP_INTERNAL_GRAPH_FILE_NAME")) {
config.dumpInternalGraphFileName = envVar;
config.compileConfig().dumpInternalGraphFileName = envVar;
}
if (const auto envVar = std::getenv("IE_VPU_DUMP_INTERNAL_GRAPH_DIRECTORY")) {
config.dumpInternalGraphDirectory = envVar;
config.compileConfig().dumpInternalGraphDirectory = envVar;
}
if (const auto envVar = std::getenv("IE_VPU_DUMP_ALL_PASSES")) {
config.dumpAllPasses = std::stoi(envVar) != 0;
config.compileConfig().dumpAllPasses = std::stoi(envVar) != 0;
}
CompileEnv::init(platform, config, _log);
@ -342,4 +347,16 @@ TestModel GraphTransformerTest::CreateTestModel() {
return TestModel(CreateModel());
}
PluginConfiguration createConfiguration() {
PluginConfiguration configuration;
configuration.registerOption<LogLevelOption>();
configuration.registerOption<CopyOptimizationOption>();
IE_SUPPRESS_DEPRECATED_START
configuration.registerDeprecatedOption<LogLevelOption>(VPU_CONFIG_KEY(LOG_LEVEL));
IE_SUPPRESS_DEPRECATED_END
return configuration;
}
} // namespace vpu

View File

@ -130,10 +130,12 @@ void checkStageTestInds(const StageRange& stageRange, std::initializer_list<int>
bool checkExecutionOrder(const Model& model, const std::vector<int>& execOrder);
PluginConfiguration createConfiguration();
class GraphTransformerTest : public ::testing::Test {
public:
Platform platform = Platform::MYRIAD_X;
CompilationConfig config;
ncDevicePlatform_t platform = ncDevicePlatform_t::NC_MYRIAD_X;
PluginConfiguration config;
StageBuilder::Ptr stageBuilder;
FrontEnd::Ptr frontEnd;

View File

@ -12,6 +12,8 @@
#include <vpu/graph_transformer.hpp>
#include <vpu/utils/logger.hpp>
#include "graph_transformer_tests.hpp"
#include <ngraph/op/util/attr_types.hpp>
#include <ngraph_functions/subgraph_builders.hpp>
@ -48,9 +50,8 @@ public:
auto fn_ptr = ngraph::builder::subgraph::makeSplitConvConcat();
ASSERT_NO_THROW(_network = InferenceEngine::CNNNetwork(fn_ptr));
CompilationConfig compileConfig;
auto log = std::make_shared<Logger>("GraphCompiler", LogLevel::None, consoleOutput());
_compiledGraph = compileNetwork(_network, Platform::MYRIAD_X, compileConfig, log, &_mockCore);
_compiledGraph = compileNetwork(_network, ncDevicePlatform_t::NC_MYRIAD_X, createConfiguration(), log, &_mockCore);
}
CNNNetwork _network;

View File

@ -24,7 +24,7 @@ class AnnotateMemoryTypes : public GraphTransformerTest, public testing::WithPar
protected:
void SetUp() override {
ASSERT_NO_FATAL_FAILURE(GraphTransformerTest::SetUp());
config.enableMemoryTypesAnnotation = true;
config.compileConfig().enableMemoryTypesAnnotation = true;
ASSERT_NO_FATAL_FAILURE(InitCompileEnv());
ASSERT_NO_FATAL_FAILURE(InitPipeline());

View File

@ -10,7 +10,6 @@
#include <inference_engine.hpp>
#include <ie_plugin_config.hpp>
#include <vpu/vpu_plugin_config.hpp>
#include <vpu/private_plugin_config.hpp>
#include <gna/gna_config.hpp>
#include <common_test_utils/test_assertions.hpp>
#include <memory>
@ -111,4 +110,4 @@ const TestModel convReluNormPoolFcModelQ78 = getConvReluNormPoolFcModel(Inferenc
class FPGAHangingTest : public BehaviorPluginTest {
};
#endif

View File

@ -30,7 +30,6 @@ std::vector<std::string> MyriadDevicesInfo::getDevicesList(
const ncDeviceProtocol_t deviceProtocol,
const ncDevicePlatform_t devicePlatform,
const XLinkDeviceState_t state) {
deviceDesc_t req_deviceDesc = {};
req_deviceDesc.protocol = convertProtocolToXlink(deviceProtocol);
req_deviceDesc.platform = convertPlatformToXlink(devicePlatform);

View File

@ -30,8 +30,7 @@ public:
std::vector<std::string> getDevicesList(
const ncDeviceProtocol_t deviceProtocol = NC_ANY_PROTOCOL,
const ncDevicePlatform_t devicePlatform = NC_ANY_PLATFORM,
const XLinkDeviceState_t state = X_LINK_ANY_STATE
);
const XLinkDeviceState_t state = X_LINK_ANY_STATE);
inline bool isMyriadXDevice(const std::string &device_name);
inline bool isMyriad2Device(const std::string &device_name);
@ -77,4 +76,4 @@ long MyriadDevicesInfo::getAmountOfUnbootedDevices(const ncDeviceProtocol_t devi
long MyriadDevicesInfo::getAmountOfBootedDevices(const ncDeviceProtocol_t deviceProtocol) {
return getAmountOfDevices(deviceProtocol, NC_ANY_PLATFORM, X_LINK_BOOTED);
}

View File

@ -4,6 +4,7 @@
#include "myriad_protocol_case.hpp"
#include "mvnc_ext.h"
#include "vpu/myriad_config.hpp"
void MyriadProtocolTests::SetUp() {
protocol = GetParam();

View File

@ -15,6 +15,7 @@
#include "helpers/myriad_devices.hpp"
#include <cpp/ie_plugin.hpp>
#include <vpu/private_plugin_config.hpp>
using namespace std;
using namespace ::testing;

View File

@ -3,6 +3,7 @@
//
#include "behavior_test_plugin.h"
#include "vpu/myriad_config.hpp"
// correct params
#define BEH_MYRIAD BehTestParams("MYRIAD", \

View File

@@ -169,10 +169,15 @@ TEST_F(myriadConfigsWithBlobImportTests_smoke, TryingToSetCompileOptionPrintsWar
     std::string content = redirectCoutStream.str();
     for (auto &&elem : config) {
+        // TODO: remove once all options are migrated
+        std::stringstream deprecatedExpectedMsgStream;
+        deprecatedExpectedMsgStream << "[Warning][VPU][Config] " << elem.first;
+        const auto& deprecatedMsg = deprecatedExpectedMsgStream.str();
         std::stringstream expectedMsgStream;
-        expectedMsgStream << "[Warning][VPU][Config] " << elem.first;
-        std::string msg = expectedMsgStream.str();
-        ASSERT_TRUE(content.find(msg) != std::string::npos) << msg;
+        expectedMsgStream << "[Warning][VPU][Configuration] Configuration option \"" << elem.first;
+        const auto& msg = expectedMsgStream.str();
+        ASSERT_TRUE(content.find(msg) != std::string::npos || content.find(deprecatedMsg) != std::string::npos) << msg;
     }
 }


@@ -5,6 +5,7 @@
#include "myriad_layers_tests.hpp"
using namespace InferenceEngine;
using namespace InferenceEngine::details;
using myriadConcatTestParams = std::tuple<InferenceEngine::SizeVector, int32_t, InferenceEngine::SizeVector, int32_t, int32_t >;
typedef myriadLayerTestBaseWithParam<myriadConcatTestParams> myriadLayersTestsConcat_smoke;


@@ -7,6 +7,8 @@
 #include <vpu/utils/logger.hpp>
 #include <vpu/compile_env.hpp>
 #include <vpu/graph_transformer_internal.hpp>
+#include <vpu/configuration/options/log_level.hpp>
+#include <vpu/configuration/options/copy_optimization.hpp>
 using namespace InferenceEngine;
 using namespace vpu;
@@ -19,12 +21,12 @@ void graphTransformerFunctionalTests::SetUp() {
     vpuLayersTests::SetUp();
     _stageBuilder = std::make_shared<StageBuilder>();
-    _platform = CheckMyriadX() ? Platform::MYRIAD_X : Platform::MYRIAD_2;
+    _platform = CheckMyriadX() ? ncDevicePlatform_t::NC_MYRIAD_X : ncDevicePlatform_t::NC_MYRIAD_2;
 }
 void graphTransformerFunctionalTests::CreateModel() {
     const auto compilerLog = std::make_shared<Logger>("Test", LogLevel::Info, consoleOutput());
-    CompileEnv::init(_platform, _compilationConfig, compilerLog);
+    CompileEnv::init(_platform, _configuration, compilerLog);
     AutoScope autoDeinit([] {
         CompileEnv::free();
     });
@@ -43,7 +45,13 @@ void graphTransformerFunctionalTests::CreateModel() {
 void graphTransformerFunctionalTests::PrepareGraphCompilation() {
     SetSeed(DEFAULT_SEED_VALUE);
-    _compilationConfig = CompilationConfig();
+    _configuration.registerOption<LogLevelOption>();
+    _configuration.registerOption<CopyOptimizationOption>();
+IE_SUPPRESS_DEPRECATED_START
+    _configuration.registerDeprecatedOption<LogLevelOption>(VPU_CONFIG_KEY(LOG_LEVEL));
+IE_SUPPRESS_DEPRECATED_END
     _inputsInfo.clear();
     _outputsInfo.clear();
     _inputMap.clear();
@@ -87,7 +95,7 @@ int64_t graphTransformerFunctionalTests::CompileAndInfer(Blob::Ptr& inputBlob, B
     auto compiledGraph = compileModel(
         _gtModel,
         _platform,
-        _compilationConfig,
+        _configuration,
         compilerLog);
     std::istringstream instream(std::string(compiledGraph->blob.data(), compiledGraph->blob.size()));


@@ -25,12 +25,12 @@ protected:
         bool lockLayout = false);
 protected:
-    vpu::ModelPtr _gtModel;
-    vpu::CompilationConfig _compilationConfig;
-    vpu::StageBuilder::Ptr _stageBuilder;
-    vpu::Data _dataIntermediate;
+    vpu::ModelPtr _gtModel;
+    vpu::PluginConfiguration _configuration;
+    vpu::StageBuilder::Ptr _stageBuilder;
+    vpu::Data _dataIntermediate;
 private:
-    vpu::Platform _platform = vpu::Platform::MYRIAD_X;
+    ncDevicePlatform_t _platform = ncDevicePlatform_t::NC_MYRIAD_X;
     InferenceEngine::ExecutableNetwork _executableNetwork;
 };


@@ -62,8 +62,8 @@ protected:
             const bool usePermuteMerging,
             Blob::Ptr& outputBlob) {
         PrepareGraphCompilation();
-        _compilationConfig.detectBatch = false;
-        _compilationConfig.enablePermuteMerging = usePermuteMerging;
+        _configuration.compileConfig().detectBatch = false;
+        _configuration.compileConfig().enablePermuteMerging = usePermuteMerging;
         IE_ASSERT(permutationVectors.size() >= 2);


@@ -13,7 +13,7 @@
 #include "single_layer_common.hpp"
 #include "vpu/vpu_plugin_config.hpp"
-#include <graph_transformer/include/vpu/private_plugin_config.hpp>
+#include <vpu/private_plugin_config.hpp>
 using config_t = std::map<std::string, std::string>;


@@ -63,6 +63,7 @@ if (ENABLE_MYRIAD)
         engines/vpu/myriad_tests/helpers/*cpp
         engines/vpu/myriad_tests/helpers/*h
         ${IE_MAIN_SOURCE_DIR}/src/vpu/myriad_plugin/*.cpp
+        ${IE_MAIN_SOURCE_DIR}/src/vpu/myriad_plugin/configuration/*.cpp
     )
 include_directories(
     engines/vpu/myriad_tests/helpers


@@ -12,6 +12,8 @@
#include <debug.h>
#include "../gna_matcher.hpp"
using namespace InferenceEngine::details;
typedef struct {
std::vector<size_t> input_shape;
std::vector<size_t> squeeze_indices;


@@ -25,8 +25,8 @@ using VPU_AdjustDataLocationTest = GraphTransformerTest;
 //
 TEST_F(VPU_AdjustDataLocationTest, FlushCMX_TwoSpecialConsumers) {
-    config.numSHAVEs = 1;
-    config.numCMXSlices = 1;
+    config.compileConfig().numSHAVEs = 1;
+    config.compileConfig().numCMXSlices = 1;
     InitCompileEnv();
     DataDesc dataDesc1(DataType::FP16, DimsOrder::NCHW, {CMX_SLICE_SIZE / (2 * 2), 1, 2, 1});


@@ -24,8 +24,8 @@ TEST_F(VPU_AddVpuScaleTest, CanAddVpuScaleToNetwork) {
     InitCompileEnv();
     auto& env = CompileEnv::get();
-    CompilationConfig config{};
-    config.irWithVpuScalesDir = "/";
+    auto config = createConfiguration();
+    config.compileConfig().irWithVpuScalesDir = "/";
     env.updateConfig(config);
     std::shared_ptr<ngraph::Function> function;
@@ -69,8 +69,8 @@ TEST_F(VPU_AddVpuScaleTest, CanAddVpuScaleToNetwork) {
 TEST_F(VPU_AddVpuScaleTest, VpuScaleFromIrChangesWeights) {
     InitCompileEnv();
     const auto& env = CompileEnv::get();
-    CompilationConfig config{};
-    config.irWithVpuScalesDir = "/";
+    auto config = createConfiguration();
+    config.compileConfig().irWithVpuScalesDir = "/";
     env.updateConfig(config);
     std::shared_ptr<ngraph::Function> function;


@@ -8,6 +8,8 @@
 #include <iomanip>
 #include <vpu/utils/io.hpp>
+#include <vpu/configuration/options/log_level.hpp>
+#include <vpu/configuration/options/copy_optimization.hpp>
 namespace vpu {
@@ -158,6 +160,18 @@ void TestModel::setStageBatchInfo(
     }
 }
+PluginConfiguration createConfiguration() {
+    PluginConfiguration configuration;
+    configuration.registerOption<LogLevelOption>();
+    configuration.registerOption<CopyOptimizationOption>();
+IE_SUPPRESS_DEPRECATED_START
+    configuration.registerDeprecatedOption<LogLevelOption>(VPU_CONFIG_KEY(LOG_LEVEL));
+IE_SUPPRESS_DEPRECATED_END
+    return configuration;
+}
 void GraphTransformerTest::SetUp() {
     ASSERT_NO_FATAL_FAILURE(TestsCommon::SetUp());
@@ -170,6 +184,8 @@ void GraphTransformerTest::SetUp() {
     frontEnd = std::make_shared<FrontEnd>(stageBuilder, &_mockCore);
     backEnd = std::make_shared<BackEnd>();
     passManager = std::make_shared<PassManager>(stageBuilder, backEnd);
+    config = createConfiguration();
 }
 void GraphTransformerTest::TearDown() {
@@ -186,13 +202,13 @@ void GraphTransformerTest::TearDown() {
 void GraphTransformerTest::InitCompileEnv() {
     if (const auto envVar = std::getenv("IE_VPU_DUMP_INTERNAL_GRAPH_FILE_NAME")) {
-        config.dumpInternalGraphFileName = envVar;
+        config.compileConfig().dumpInternalGraphFileName = envVar;
     }
     if (const auto envVar = std::getenv("IE_VPU_DUMP_INTERNAL_GRAPH_DIRECTORY")) {
-        config.dumpInternalGraphDirectory = envVar;
+        config.compileConfig().dumpInternalGraphDirectory = envVar;
     }
     if (const auto envVar = std::getenv("IE_VPU_DUMP_ALL_PASSES")) {
-        config.dumpAllPasses = std::stoi(envVar) != 0;
+        config.compileConfig().dumpAllPasses = std::stoi(envVar) != 0;
     }
     CompileEnv::init(platform, config, _log);


@@ -176,10 +176,12 @@ void CheckStageTestInds(const StageRange& stageRange, std::initializer_list<int>
     }
 }
+PluginConfiguration createConfiguration();
 class GraphTransformerTest : public TestsCommon {
 public:
-    Platform platform = Platform::MYRIAD_X;
-    CompilationConfig config;
+    ncDevicePlatform_t platform = ncDevicePlatform_t::NC_MYRIAD_X;
+    PluginConfiguration config;
     StageBuilder::Ptr stageBuilder;
     FrontEnd::Ptr frontEnd;


@@ -91,7 +91,7 @@ TEST_F(VPU_ReplaceDeconvByConvTest, deconvReplacedByConvIfKernelSizeFitsHWUnit)
 }
 TEST_F(VPU_ReplaceDeconvByConvTest, deconvCannotBeReplacedByConvIfDisabledInConfig) {
-    config.hwBlackList.insert("deconv");
+    config.compileConfig().hwBlackList.insert("deconv");
     InitCompileEnv();
     InitDeconvStage(16, 15);


@@ -10,6 +10,8 @@
namespace {
using InferenceEngine::details::operator<<;
struct From {
explicit From(int new_value) : value(new_value) {}
int value;


@@ -40,3 +40,6 @@ if (ENABLE_MYRIAD_NO_BOOT)
 endif()
 set_property(TARGET ${TARGET_NAME} PROPERTY C_STANDARD 99)
+# TODO: remove once all options are migrated
+ie_developer_export_targets(${TARGET_NAME})


@@ -71,6 +71,9 @@ if(NOT WIN32)
             ${LIBUSB_LIBRARY})
 endif()
+# TODO: remove once all options are migrated
+ie_developer_export_targets(${TARGET_NAME})
 if(ENABLE_TESTS AND ENABLE_MYRIAD_MVNC_TESTS)
     add_subdirectory(tests)
 endif()


@@ -300,7 +300,6 @@ TEST_P(MvncOpenDevice, WatchdogShouldResetDeviceWithoutConnection) {
     if (availableDevices_ == 0)
         GTEST_SKIP() << ncProtocolToStr(_deviceProtocol) << " devices not found";
     ncDeviceHandle_t* deviceHandle = nullptr;
-    std::string deviceName;
     deviceDesc_t deviceDescToBoot = {};
     deviceDesc_t in_deviceDesc = {};