Remove opset0 support and undesired passes from Interpreter backend (#1469)
* Move evaluate() interface from some OPs to Interpreter
* commit
* Move shuffle channels reference to OP's evaluate
* Add some operations missed in evaluate_node
* Fix select references invocation from evaluate_node()
* Activation refs (#2)
* HardSigmoid
* Elu
* Selu
* Gelu
* Move to test runtime
* Rollback downgrade passes deletion
* Initial batch to space refs
* Return opset1_upgrade
* WIP: Add space to batch evaluate
* Fix space to batch
* Add evaluates function in evaluates_map (#4)
* Add space to batch evaluate
* Fix crop in batch to space references
* Remove vectors reallocation in evaluates for b2s and s2b
* .
* Add SpaceToDepth evaluate
* Add depth to space evaluate
* Remove code duplication in depth to space evaluate
* Fix some failed layer tests
* Ngraph test (#3)
* Remove some v0 ops & fix some tests
* Fixes BatchNorm
* Next
* dd
* s
* Add dot & replace slice refs
* d
* dkj
* Review fixes part 1
* Fixes. Part 2
* Fixes. Part 3
* Enable cells refs in evaluate map
* Fix some failed layer tests
* Some more fixes
* Fix code style (#6)
* Tests (#7)
* PriorBox
* Mod
* NormalizeL2
* Update prior_box.hpp
* Fix one hot ref call
* .
* Select (#8)
* Select
* Fix code style
* Fix select messages
* ReverseSeq (#9)
* ReverseSeq
* Select
* ExtractImagePatches, Sequence
* Fix code style
* Remove extra
* Remove extra line
* Add fake quantize reference
* Align convolution layer tests instantiations with updated definition
* Disabled some failed LPT tests
* Disabled some failed LPT tests
* Remove undesired changes
* Update unit-test manifests + some code cleanup
* Fix code style (#10)
* Normalize L2 refs support (from PR #2327)
* Fix code style
* Apply review comments. Part 1 (#11)
* Apply first part of review comments
* Update onnx_import.in.cpp
* Remove redundant reshape from shuffle_channels evaluate
* Decompose GroupConvolution
* [IE Ngraph] Fix some operation inheritance (#13)
* [IE TESTS] Depth2Space
* Space2Depth
* ShuffleChannels
* Fix code style
* Fix code style
* [IE NGraph] Remove decompose op (#14)
* .
* Fix losing control dependency in replace_node
* Fix losing control dependency in replace_node
* Fix code style
* Fix FQ references build on Windows
* Fix code style
* Apply comments (#15)
* [Ie Ngraph] Remove using v1::Add
* [Ie Ngraph] Remove using v1::Multiply
* [Ie Ngraph] Remove using v1::Subtract
* [Ie Ngraph] Remove using v1::Divide
* [Ie Ngraph] Remove using v1::Equal
* [Ie Ngraph] Remove using v1::Greater
* [Ie Ngraph] Remove using v1::Greater_eq
* [Ie Ngraph] Remove using v1::Less
* [Ie Ngraph] Remove using v1::LessEq
* [Ie Ngraph] Remove using operator+
* [Ie Ngraph] Remove using operator/
* [Ie Ngraph] Remove using operator*
* [Ie Ngraph] Remove using operator-
* Fix code style
* CI (#16)
* Fix CentOS compilation
* Revert ngraph::op::v0::Multiply removal due to OpenCV
* Android fix (#17)
* fix failures
* Fix code style
* Add (#18)
* Android fix
* Add
* Add in opset1 upgrade pass
* Add in opset1 upgrade pass
* Remove v0::Add, reverted removing v0::Multiply (#19)
* Remove overloaded math operators from PyNgraph
* Remove overloaded math operators from PyNgraph
* Fix gna tests (#20)
* Fix gna tests
* Squashed commit of the following:

commit 565b504c1c
Author: Alexander Zhogov <alexander.zhogov@intel.com>
Date:   Tue Oct 13 13:27:34 2020 +0300

    GitHub CI: Add files_size.yml (#2570)
    * GitHub CI: Add files_size.yml
    * Update job name

commit ab0fb29853
Author: Vladislav Vinogradov <vlad.vinogradov@intel.com>
Date:   Tue Oct 13 11:37:30 2020 +0300

    [IE][BUILD] Fix C5208 warning under Windows (#2628)
    * C++ feature in C `typedef struct` code.
    * The warning can be promoted to error in dependent projects.
    C5208: unnamed class used in typedef name cannot declare members other than
    non-static data members, member enumerations, or member classes

commit 15a338e89b
Author: helmutg <helmut@subdivi.de>
Date:   Mon Oct 12 22:24:24 2020 +0200

    add build option USE_SYSTEM_PUGIXML (#2502)
    It allows skipping inference-engine/thirdparty/pugixml and using the system copy instead.
    Thanks to @Osse for helping understand cmake scoping rules.
    Co-authored-by: Helmut Grohne <helmut.grohne@intenta.de>

commit 7ac8cd8586
Author: Alexander Zhogov <alexander.zhogov@intel.com>
Date:   Mon Oct 12 19:23:00 2020 +0300

    Azure CI: Fix nGraph ONNX

commit 3a2e33962c
Author: Alexander Zhogov <alexander.zhogov@intel.com>
Date:   Mon Oct 12 19:20:28 2020 +0300

    Azure CI: Disable steps in nGraph ONNX

commit 5835974fad
Author: azhogov <alexander.zhogov@intel.com>
Date:   Mon Oct 12 18:46:14 2020 +0300

    Azure CI: Add linux_ngraph_onnx.yml

* LRN Reference (#21)
* Disable failed tests on ia32
* Remove redundant broadcast from MVN ref
* Fix missed GatherND in opset_int_tbl + code style
* Remove one extra temporary buffer from MVN ref
* Merge master (#22)
* Leaky relu transformation refactor (#2640)
* Refactored LeakyRelu transformation
* Added unit test for LeakyRelu transformation + removed duplicate test function valued_const
* nGraph implementation of NMS-5 (without `evaluate()`) (#2651)
* Written nGraph NMS-5 without evaluate().
* Used NGRAPH_RTTI_DECLARATION.
* setupvars.sh: Updated setting pyenv error to warning. (#2663)
* Fix itt build (#2662)
* Loop-5 operation specification (#2291)
  The Loop-5 operation specification
* Time tests improvements (#2642)
* Remove extra functions from run_timetest.py
* Add `log.debug` of raw and aggregated statistics in run_timetest.py
* Implement storing of models locally for test_timetest.py
* Fixed CVS-35316 (#2072)
* Extend MO for operation GatherND (#2540)
* Extend MO for operation GatherND
* Update documentation
* Rename GatherNd.py to gathernd.py
  Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Add hsigmoid op to ngraph (#2647)
* [IE CLDNN] Fixes for GatherTree and ReverseSequence (#2660)
* ReorgYolo reference implementation (#2384)
* Align ReorgYolo to the spec (vector strides -> int stride)
* ReorgYolo ref impl
* ReorgYolo evaluate method
* ReorgYolo tests
* Tests update
* Style apply
* Add some comments
* Code refactor
* Comment update
* Style apply
* Build fix, mark evaluate as override
* Revert "Align ReorgYolo to the spec (vector strides -> int stride)"
* Use int_executable instead of evaluate
* Use char* instead of templates
* Code refactor
* Comment update
* Code review comment
* Add constructor aligned with spec
* Update shape validation
* Update attributes tests
* Add type_prop tests
* Update backend tests
* Add single layer tests
* Update the spec
* Remove wrong transformation test
* Add ReorgYolo to evaluates_map
* code style

Co-authored-by: Evgeny Lazarev <evgeny.lazarev@intel.com>
Co-authored-by: Vladimir Gavrilov <vladimir.gavrilov@intel.com>
Co-authored-by: Artyom Anokhov <artyom.anokhov@intel.com>
Co-authored-by: Andrey Somsikov <andrey.somsikov@intel.com>
Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: iliya mironov <iliya.mironov@intel.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>

* RegionYolo
* Apply review comments
* Merge remote-tracking branch 'upstream/master' into update_evaluates
  # Conflicts:
  #   ngraph/core/src/op/mvn.cpp
  #   ngraph/test/backend/fused_op.in.cpp
  #   ngraph/test/runtime/ie/unit_test.manifest
  #   ngraph/test/runtime/interpreter/int_executable.hpp
  #   ngraph/test/runtime/interpreter/opset_int_tbl.hpp
  #   ngraph/test/runtime/interpreter/unit_test.manifest
  #   ngraph/test/runtime/opset0_tbl.hpp
* Apply code style
* Apply comments
* Apply code style
* Fix RegionYolo evaluate redefinition
* Removed defines from evaluates map
* Apply code style
* Fix MVN ref
* rename select reference argument
* Fix code style
* Fix Fake Quantize references calculation (#24)
* Fix MVN ref
* Fix MVN & adding NMS
* Fix TI
* Temporarily relax comparison threshold for FQ SLT
* Fix GPU LPT Tests
* Add explicit rounding mode setting in FQ references
* Apply code style
* Rollback op_is test deletion
* Apply code style
* Fix merge conflict resolving issues
* Apply code style

Co-authored-by: Irina Efode <irina.efode@intel.com>
Co-authored-by: Anton Zaytsev <anton.zaytsev@intel.com>
Co-authored-by: Evgeny Lazarev <evgeny.lazarev@intel.com>
Co-authored-by: Vladimir Gavrilov <vladimir.gavrilov@intel.com>
Co-authored-by: Artyom Anokhov <artyom.anokhov@intel.com>
Co-authored-by: Andrey Somsikov <andrey.somsikov@intel.com>
Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: iliya mironov <iliya.mironov@intel.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
parent b3124a5c77
commit 6467c64000
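Most of the diff below applies one mechanical migration: deprecated ngraph::op::v0 arithmetic and comparison nodes (and the overloaded +, -, *, / operators that created them) are replaced with explicitly constructed opset1 nodes. A minimal before/after sketch of the pattern; illustrative only, where `a` and `b` stand for any two ngraph::Output<Node> values:

    // Before: deprecated v0 ops, created implicitly via the overloaded
    // operators that this PR removes.
    auto sum  = a + b;   // ngraph::op::v0::Add
    auto prod = a * b;   // ngraph::op::v0::Multiply

    // After: opset1 nodes are constructed explicitly.
    auto sum  = std::make_shared<ngraph::op::v1::Add>(a, b);
    auto prod = std::make_shared<ngraph::op::v1::Multiply>(a, b);

Call sites that need a particular broadcast rule can pass it explicitly, as several hunks below do with ngraph::op::AutoBroadcastSpec::NUMPY.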
@@ -1062,7 +1062,7 @@ void convertFunctionToICNNNetwork(const std::shared_ptr<const ::ngraph::Function
         std::make_shared<Builder::NodeConverter<::ngraph::op::v1::Softmax>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::v1::Split>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::VariadicSplit>>(),
-        std::make_shared<Builder::NodeConverter<::ngraph::op::Subtract>>(),
+        std::make_shared<Builder::NodeConverter<::ngraph::op::v1::Subtract>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::Tanh>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::TileIE>>(),
         std::make_shared<Builder::NodeConverter<::ngraph::op::TensorIterator>>(),
@@ -537,7 +537,7 @@ CNNLayer::Ptr NodeConverter<ngraph::op::v1::Softmax>::createLayer(const std::sha
 }
 
 template <>
-CNNLayer::Ptr NodeConverter<ngraph::op::Subtract>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
+CNNLayer::Ptr NodeConverter<ngraph::op::v1::Subtract>::createLayer(const std::shared_ptr<ngraph::Node>& layer) const {
     LayerParams params = {layer->get_friendly_name(), "Eltwise",
                           details::convertPrecision(layer->get_output_element_type(0))};
     auto res = std::make_shared<InferenceEngine::EltwiseLayer>(params);
@@ -36,10 +36,10 @@ TEST(algebraic_simplification, add_negative_tests) {
     auto c = make_shared<op::Parameter>(type, shape);
     auto abs_a = make_shared<op::Abs>(a);
     auto iconst2 = ngraph::make_constant_from_string("2", type, shape);
-    auto add_a_0 = a + iconst2;
-    auto add_a_0_0 = add_a_0 + iconst2;
-    auto add_b_0 = b + abs_a;
-    auto add_b_0_0 = add_b_0 + abs_a;
+    auto add_a_0 = std::make_shared<ngraph::op::v1::Add>(a, iconst2);
+    auto add_a_0_0 = std::make_shared<ngraph::op::v1::Add>(add_a_0, iconst2);
+    auto add_b_0 = std::make_shared<ngraph::op::v1::Add>(b, abs_a);
+    auto add_b_0_0 = std::make_shared<ngraph::op::v1::Add>(add_b_0, abs_a);
 
     auto f = std::make_shared<Function>(ngraph::NodeVector{a, b, add_a_0_0, c, add_b_0_0},
                                         ParameterVector{a, b, c});
@@ -63,10 +63,10 @@ TEST(algebraic_simplification, multiply_negative_tests) {
     auto c = make_shared<op::Parameter>(type, shape);
     auto abs_a = make_shared<op::Abs>(a);
     auto iconst2 = ngraph::make_constant_from_string("2", type, shape);
-    auto add_a_0 = a * iconst2;
-    auto add_a_0_0 = add_a_0 * iconst2;
-    auto add_b_0 = b * abs_a;
-    auto add_b_0_0 = add_b_0 * abs_a;
+    auto add_a_0 = make_shared<op::v1::Multiply>(a, iconst2);
+    auto add_a_0_0 = make_shared<op::v1::Multiply>(add_a_0, iconst2);
+    auto add_b_0 = make_shared<op::v1::Multiply>(b, abs_a);
+    auto add_b_0_0 = make_shared<op::v1::Multiply>(add_b_0, abs_a);
 
     auto f = std::make_shared<Function>(ngraph::NodeVector{a, b, add_a_0_0, c, add_b_0_0},
                                         ParameterVector{a, b, c});
@@ -228,7 +228,7 @@ TEST(algebraic_simplification, log_no_exp) {
     auto a = make_shared<op::Parameter>(element::f32, Shape{96, 100});
     auto b = make_shared<op::Parameter>(element::f32, Shape{96, 100});
     auto abs_a = make_shared<op::Abs>(a);
-    auto div = abs_a / b;
+    auto div = std::make_shared<op::v1::Divide>(abs_a, b);
     auto log_div = make_shared<op::Log>(div);
 
     auto neg_inner = make_shared<op::Negative>(log_div);
@@ -248,7 +248,7 @@ TEST(algebraic_simplification, log_no_divide) {
     auto a = make_shared<op::Parameter>(element::f32, Shape{96, 100});
     auto b = make_shared<op::Parameter>(element::f32, Shape{96, 100});
     auto exp_a = make_shared<op::Exp>(a);
-    auto mul = exp_a * b;
+    auto mul = make_shared<op::v1::Multiply>(exp_a, b);
     auto log_mul = make_shared<op::Log>(mul);
 
     auto neg_inner = make_shared<op::Negative>(log_mul);
@@ -48,7 +48,7 @@ protected:
     auto mem_i = make_shared<op::v0::Constant>(type, shape, 0);
     auto mem_r = make_shared<op::v3::ReadValue>(mem_i, "id");
 
-    auto mul = make_shared<op::v0::Multiply>(mem_r, input);
+    auto mul = make_shared<op::v1::Multiply>(mem_r, input);
     auto sig = make_shared<op::v0::Sigmoid>(mul);
 
     auto fc1_w = make_shared<op::v0::Constant>(type, Shape{C, C}, 1);
@@ -21,15 +21,16 @@ const std::vector<LayerTransformation::Params> trasformationParamValues = {
 };
 
 const std::vector<ngraph::builder::subgraph::FakeQuantizeOnData> fakeQuantizeOnDataValues = {
     { 256ul, {}, { 0.f }, { 2.55f }, { 0.f }, { 2.55f } },
     {
         256ul,
         { 1ul, 3ul, 1ul, 1ul },
         { 0.f, 0.f, 0.f },
         { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
         { 0.f, 0.f, 0.f },
         { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
     },
+    { 256ul, {}, { 0.f }, { 2.55f }, { 0.f }, { 2.55f } }
     // TODO: Issue 39810
     // {
     //     256ul,
     //     { 1ul, 3ul, 1ul, 1ul },
     //     { 0.f, 0.f, 0.f },
     //     { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
     //     { 0.f, 0.f, 0.f },
     //     { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
     // },
 };
 
 INSTANTIATE_TEST_CASE_P(smoke_LPT, FuseFakeQuantizeAndScaleShiftTransformation,
@@ -26,7 +26,7 @@ const std::vector<ReshapeTransformationParam> params = {
     {
         ngraph::Shape{ 1, 3, 32 },
         { 1, 3, 4, 8 },
-        { 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
+        { 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
     },
     // 4D -> 3D
     {
@@ -24,27 +24,27 @@ namespace {
 
 const std::vector<LayerTestsDefinitions::UnsqueezeTransformationParam> params = {
     {
-        { 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
+        { 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
         { 0.0, 3.0 },
         { 3, 3, 5}
     },
     {
-        { 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
+        { 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
         { 0.0, 1.0 },
         { 3, 3, 3 }
     },
     {
-        { 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
+        { 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
        { 3.0 },
        { 3, 4, 5, 6 }
     },
     {
-        { 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
+        { 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
         { 0.0, 3.0 },
         { 1, 32, 2}
     },
     {
-        { 256ul, ngraph::Shape { 1, 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
+        { 256ul, ngraph::Shape { 1, 1, 1 }, { -12.8f }, { 12.7f }, { -12.8f }, { 12.7f } },
         { 0.0, 1.0 },
         { 46, 128, 2 }
     }
@@ -22,14 +22,15 @@ const std::vector<LayerTransformation::Params> trasformationParamValues = {
 
 const std::vector<ngraph::builder::subgraph::FakeQuantizeOnData> fakeQuantizeOnDataValues = {
     { 256ul, {}, { 0.f }, { 2.55f }, { 0.f }, { 2.55f } },
     {
         256ul,
         { 1ul, 3ul, 1ul, 1ul },
         { 0.f, 0.f, 0.f },
         { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
         { 0.f, 0.f, 0.f },
         { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
     },
     // TODO: Issue 39810
     // {
     //     256ul,
     //     { 1ul, 3ul, 1ul, 1ul },
     //     { 0.f, 0.f, 0.f },
     //     { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f },
     //     { 0.f, 0.f, 0.f },
     //     { 2.55f / 10.f, 2.55f / 5.f, 2.55f / 2.f }
     // },
 };
 
 INSTANTIATE_TEST_CASE_P(smoke_LPT, FuseFakeQuantizeAndScaleShiftTransformation,
@@ -26,19 +26,19 @@ const std::vector<ReshapeTransformationParam> params = {
     {
         ngraph::Shape{ 1, 3, 32 },
         { 1, 3, 4, 8 },
-        { 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
+        { 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
     },
     // 4D -> 3D
     {
         ngraph::Shape{ 1, 3, 16, 16 },
         { 1, 3, 256 },
-        { 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
+        { 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
     },
     // 4D -> 2D
     {
         ngraph::Shape{ 1, 3, 4, 8 },
         { 1, -1 },
-        { 256ul, ngraph::Shape{ 1, 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
+        { 256ul, ngraph::Shape{ 1, 1, 1 }, { 0.f }, { 255.f }, { 0.f }, { 25.5f } },
     },
 };
 
@@ -24,27 +24,27 @@ namespace {
 
 const std::vector<LayerTestsDefinitions::UnsqueezeTransformationParam> params = {
     {
-        { 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
+        { 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
         { 0.0, 3.0 },
         { 3, 3, 5}
     },
     {
-        { 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
+        { 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
         { 0.0, 1.0 },
         { 3, 3, 3 }
     },
     {
-        { 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
+        { 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
         { 3.0 },
         { 3, 4, 5, 6 }
     },
     {
-        { 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
+        { 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
         { 0.0, 3.0 },
         { 1, 32, 2}
     },
     {
-        { 256ul, ngraph::Shape { 1, 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
+        { 256ul, ngraph::Shape { 1, 1, 1 }, { 0.f }, { 255.f }, { -128.f }, { 127.f } },
         { 0.0, 1.0 },
         { 46, 128, 2 }
     }
@@ -29,13 +29,13 @@ TEST_P(ExecGraphKeepAssignNode, KeepAssignNode) {
     using std::make_shared;
     using namespace ngraph::op;
 
-    // Some simple graph with Memory(Assign) node           //  in   read  //
-    auto input = make_shared<Parameter>(type, shape);       //  |    \  /  //
-    auto mem_i = make_shared<Constant>(type, shape, 0);     //  |    mul   //
-    auto mem_r = make_shared<ReadValue>(mem_i, "id");       //  |   /  \   //
-    auto mul   = make_shared<Multiply>(mem_r, input);       // sum  assign //
-    auto mem_w = make_shared<Assign>(mul, "id");            //  |          //
-    auto sum   = make_shared<Add>(mul, input);              // out         //
+    // Some simple graph with Memory(Assign) node                      //  in   read  //
+    auto input = make_shared<Parameter>(type, shape);                  //  |    \  /  //
+    auto mem_i = make_shared<Constant>(type, shape, 0);                //  |    mul   //
+    auto mem_r = make_shared<ReadValue>(mem_i, "id");                  //  |   /  \   //
+    auto mul   = make_shared<ngraph::op::v1::Multiply>(mem_r, input);  // sum  assign //
+    auto mem_w = make_shared<Assign>(mul, "id");                       //  |          //
+    auto sum   = make_shared<ngraph::op::v1::Add>(mul, input);         // out         //
 
     mem_w->add_control_dependency(mem_r);
     sum->add_control_dependency(mem_w);
@@ -198,7 +198,7 @@ void ActivationParamLayerTest::SetUp() {
     constantsValue = activationDecl.second;
     auto ngPrc = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(netPrecision);
     auto params = ngraph::builder::makeParams(ngPrc, {shapes.first});
-    auto activationParams = createActivationParams(ngPrc);
+    auto activationParams = createActivationParams(ngPrc, shapes.second);
 
     params[0]->set_friendly_name("Input");
     params.insert(params.end(), activationParams.begin(), activationParams.end());
@@ -43,7 +43,6 @@ std::string BatchToSpaceLayerTest::getTestCaseName(const testing::TestParamInfo<
 }
 
 void BatchToSpaceLayerTest::SetUp() {
-    SetRefMode(LayerTestsUtils::RefMode::INTERPRETER_TRANSFORMATIONS);
     std::vector<size_t> inputShape;
     std::vector<int64_t> blockShape, cropsBegin, cropsEnd;
     InferenceEngine::Precision netPrecision;
@@ -26,8 +26,8 @@
 /**
  * redefine this seed to reproduce issue with given seed that can be read from gtest logs
  */
-#define BASE_SEED   USE_CLOCK_TIME
-#define NGRAPH_SEED USE_CLOCK_TIME
+#define BASE_SEED   123
+#define NGRAPH_SEED 123
 
 namespace LayerTestsDefinitions {
@@ -85,6 +85,9 @@ void FakeQuantizeLayerTest::SetUp() {
         inputDataMax = inputArg[1];
         inputDataResolution = inputArg[2];
     }
+    if (fqDirectArg.size() != 0) {
+        threshold = (fqDirectArg[3] - fqDirectArg[2]) / levels;
+    }
     auto ngPrc = FuncTestUtils::PrecisionUtils::convertIE2nGraphPrc(netPrecision);
     auto params = ngraph::builder::makeParams(ngPrc, {inputShape});
     auto paramOuts = ngraph::helpers::convert2OutputVector(ngraph::helpers::castOps2Nodes<ngraph::op::Parameter>(params));
@@ -120,7 +120,7 @@ namespace LayerTestsDefinitions {
     // Body
     std::shared_ptr<ngraph::Node> Zo = body_params[0];
     for (int i = 1; i < body_params.size(); ++i) {
-        Zo = body_params[i] + Zo;
+        Zo = std::make_shared<ngraph::op::v1::Add>(body_params[i], Zo);
     }
 
     // body_params.insert(body_params.begin(), current_iteration);
@@ -37,8 +37,6 @@ namespace LayerTestsDefinitions {
 }
 
 void SelectLayerTest::SetUp() {
-    SetRefMode(LayerTestsUtils::RefMode::CONSTANT_FOLDING);
-
     std::vector<std::vector<size_t>> inputShapes(numOfInputs);
     InferenceEngine::Precision inputPrecision;
     ngraph::op::AutoBroadcastSpec broadcast;
@@ -43,7 +43,6 @@ std::string SpaceToBatchLayerTest::getTestCaseName(const testing::TestParamInfo<
 }
 
 void SpaceToBatchLayerTest::SetUp() {
-    SetRefMode(LayerTestsUtils::RefMode::INTERPRETER_TRANSFORMATIONS);
     std::vector<size_t> inputShape;
     std::vector<int64_t> blockShape, padsBegin, padsEnd;
     InferenceEngine::Precision inputPrecision, netPrecision;
@@ -51,7 +51,7 @@ void CascadeConcat::SetUp() {
     if (multioutput) {
         auto const_mult = ngraph::builder::makeConstant(ngPrc, ngraph::Shape{1, input1[0][1]+input2[0][1]},
                                                         std::vector<float>{1.01f});
-        auto mult = std::make_shared<ngraph::op::v0::Multiply>(concat, const_mult);
+        auto mult = std::make_shared<ngraph::op::v1::Multiply>(concat, const_mult);
         results = ngraph::ResultVector{std::make_shared<ngraph::opset1::Result>(concat2),
                                        std::make_shared<ngraph::opset1::Result>(mult)};
     } else {
@@ -52,7 +52,7 @@ void SoftsignTest::SetUp() {
     auto abs = std::make_shared<ngraph::op::Abs>(params[0]);
     auto add = std::make_shared<ngraph::op::PowerIE>(abs, 1, 1, 1);
     auto power = std::make_shared<ngraph::op::PowerIE>(add, -1, 1, 0);
-    auto mul = std::make_shared<ngraph::op::Multiply>(power, params[0]);
+    auto mul = std::make_shared<ngraph::op::v1::Multiply>(power, params[0]);
     ngraph::ResultVector results{ std::make_shared<ngraph::op::Result>(mul) };
     function = std::make_shared<ngraph::Function>(results, params, "SoftSignTest");
 }
@@ -75,10 +75,10 @@ std::shared_ptr<ngraph::Function> SoftsignTest::GenerateNgraphFriendlySoftSign()
     auto params = ngraph::builder::makeParams(ngPrc, { inputShape });
     auto abs = std::make_shared<ngraph::op::Abs>(params[0]);
     auto constant_0 = ngraph::builder::makeConstant<float>(ngPrc, inputShape, { 1 });
-    auto add = std::make_shared<ngraph::op::Add>(abs, constant_0);
+    auto add = std::make_shared<ngraph::op::v1::Add>(abs, constant_0);
     auto constant_1 = ngraph::builder::makeConstant<float>(ngPrc, inputShape, { -1 });
-    auto power = std::make_shared<ngraph::op::Power>(add, constant_1);
-    auto mul = std::make_shared<ngraph::op::Multiply>(power, params[0]);
+    auto power = std::make_shared<ngraph::op::v1::Power>(add, constant_1);
+    auto mul = std::make_shared<ngraph::op::v1::Multiply>(power, params[0]);
 
     ngraph::ResultVector results{ std::make_shared<ngraph::op::Result>(mul) };
     return std::make_shared<ngraph::Function>(results, params, "SoftSignTest");
@@ -64,7 +64,7 @@ void SplitConcatMemory::SetUp() {
     auto spl = std::make_shared<ngraph::op::v1::VariadicSplit>(cnc, axis_c, chunk_c);
 
     auto one = std::make_shared<ngraph::op::Constant>(ngPrc, ngraph::Shape{}, 1);
-    auto plus = std::make_shared<ngraph::op::Add>(cnc, one, ngraph::op::AutoBroadcastSpec::NUMPY);
+    auto plus = std::make_shared<ngraph::op::v1::Add>(cnc, one, ngraph::op::AutoBroadcastSpec::NUMPY);
     plus->set_friendly_name("plus_one");
 
     auto mem_w = std::make_shared<ngraph::op::Assign>(spl->output(1), "id");
@@ -370,17 +370,6 @@ std::vector<std::vector<std::uint8_t>> LayerTestsCommon::CalculateRefs() {
             // reference inference on device with other options and nGraph function has to be implemented here
             break;
         }
-        case INTERPRETER_TRANSFORMATIONS: {
-            auto cloned_function = ngraph::clone_function(*function);
-
-            // todo: add functionality to configure the necessary transformations for each test separately
-            ngraph::pass::Manager m;
-            m.register_pass<ngraph::pass::ConvertSpaceToBatch>();
-            m.register_pass<ngraph::pass::ConvertBatchToSpace>();
-            m.run_passes(cloned_function);
-            expectedOutputs = ngraph::helpers::interpreterFunction(cloned_function, referenceInputs, inType, convertType);
-            break;
-        }
     }
 
     return expectedOutputs;
@@ -126,7 +126,6 @@ typedef std::tuple<
 
 enum RefMode {
     INTERPRETER,
-    INTERPRETER_TRANSFORMATIONS,
     CONSTANT_FOLDING,
     IE
 };
@@ -68,7 +68,7 @@ TEST(BF16TransformerTest, KeepMemoryPrecision) {
     auto mem_r = make_shared<ReadValue>(mem_i, "id");
     mem_r->set_friendly_name("mem_r");
 
-    auto mul = make_shared<Multiply>(mem_r, input);
+    auto mul = make_shared<ngraph::op::v1::Multiply>(mem_r, input);
     auto sig = make_shared<Sigmoid>(mul);
 
     auto fc1_w = make_shared<Constant>(type, Shape{2, 2}, 1);
@@ -131,7 +131,7 @@ TEST(BF16TransformerTest, DISABLED_KeepMemoryPrecisionWithGEMM) {
     auto mem_r = make_shared<ReadValue>(mem_i, "id");
     mem_r->set_friendly_name("mem_r");
 
-    auto mul = make_shared<Multiply>(mem_r, input);
+    auto mul = make_shared<ngraph::op::v1::Multiply>(mem_r, input);
     auto sig = make_shared<Sigmoid>(mul);
 
     auto fc1_w = make_shared<Constant>(type, Shape{2, 2}, 1);
@@ -69,7 +69,7 @@ class GNAEltwiseTest : public GNATest<>, public testing::WithParamInterface<GNAE
         FC2 = std::make_shared<ngraph::op::v1::Reshape>(FC2, reshape_pattern, false);
     }
 
-    auto add = std::make_shared<ngraph::op::Add>(FC1, FC2);
+    auto add = std::make_shared<ngraph::op::v1::Add>(FC1, FC2);
 
     auto function = std::make_shared<ngraph::Function>(ngraph::NodeVector{ add }, ngraph::ParameterVector{input1, input2});
@@ -24,48 +24,6 @@ namespace ngraph
 {
     namespace op
     {
-        namespace v0
-        {
-            /// \brief Elementwise addition operation.
-            ///
-            class NGRAPH_DEPRECATED(
-                "This operation is deprecated and will be removed soon. Use v1::Add instead of it.")
-                NGRAPH_API Add : public util::BinaryElementwiseArithmetic
-            {
-                NGRAPH_SUPPRESS_DEPRECATED_START
-            public:
-                static constexpr NodeTypeInfo type_info{"Add", 0};
-                const NodeTypeInfo& get_type_info() const override { return type_info; }
-                /// \brief Constructs an uninitialized addition operation
-                Add()
-                    : util::BinaryElementwiseArithmetic(AutoBroadcastSpec::NONE)
-                {
-                }
-
-                /// \brief Constructs an addition operation.
-                ///
-                /// \param arg0 Output that produces the first input tensor.<br>
-                ///             `[d0, ...]`
-                /// \param arg1 Output that produces the second input tensor.<br>
-                ///             `[d0, ...]`
-                /// \param auto_broadcast Auto broadcast specification
-                ///
-                /// Output `[d0, ...]`
-                ///
-                Add(const Output<Node>& arg0,
-                    const Output<Node>& arg1,
-                    const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-
-                std::shared_ptr<Node>
-                    clone_with_new_inputs(const OutputVector& new_args) const override;
-
-                bool visit_attributes(AttributeVisitor& visitor) override;
-                bool evaluate(const HostTensorVector& outputs,
-                              const HostTensorVector& inputs) const override;
-                NGRAPH_SUPPRESS_DEPRECATED_END
-            };
-        } // namespace v0
-
         namespace v1
         {
             /// \brief Elementwise addition operation.
@@ -99,19 +57,13 @@ namespace ngraph
 
                 std::shared_ptr<Node>
                     clone_with_new_inputs(const OutputVector& new_args) const override;
 
                 bool visit_attributes(AttributeVisitor& visitor) override;
-                size_t get_version() const override { return 1; }
                 bool evaluate(const HostTensorVector& outputs,
                               const HostTensorVector& inputs) const override;
             };
         } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::Add;
-        NGRAPH_SUPPRESS_DEPRECATED_END
     } // namespace op
-
-    NGRAPH_DEPRECATED("This operator was deprecated and will be removed with v0 operation.")
-    NGRAPH_API
-    std::shared_ptr<Node> operator+(const Output<Node>& arg0, const Output<Node>& arg1);
 } // namespace ngraph
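Several of the following header hunks add an evaluate() override (BatchToSpace, DepthToSpace), which is what lets the Interpreter backend run an op through its host-side reference kernel instead of relying on a decomposition pass. A hedged sketch of the usual shape of such an override, assuming equal input shapes (no broadcasting) and the ngraph::runtime::reference kernels; the f32-only dispatch is a simplification, real implementations switch over all supported element types:

    bool evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const override
    {
        // Run the reference kernel directly on the host tensors.
        ngraph::runtime::reference::add<float>(inputs[0]->get_data_ptr<float>(),
                                               inputs[1]->get_data_ptr<float>(),
                                               outputs[0]->get_data_ptr<float>(),
                                               ngraph::shape_size(inputs[0]->get_shape()));
        return true;
    }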
@@ -54,6 +54,8 @@ namespace ngraph
                              const Output<Node>& block_shape,
                              const Output<Node>& crops_begin,
                              const Output<Node>& crops_end);
+                bool evaluate(const HostTensorVector& outputs,
+                              const HostTensorVector& inputs) const override;
 
                void validate_and_infer_types() override;
                std::shared_ptr<Node>
@@ -20,6 +20,7 @@
 #include "ngraph/op/op.hpp"
 #include "ngraph/op/util/attr_types.hpp"
 #include "ngraph/op/util/fused_op.hpp"
+#include "ngraph/runtime/host_tensor.hpp"
 
 NGRAPH_SUPPRESS_DEPRECATED_START
 
@@ -37,7 +38,7 @@ namespace ngraph
            ///
            /// Output node produces a tensor with shape:
            /// [N, C/(blocksize * blocksize), H * blocksize, W * blocksize]
-            class NGRAPH_API DepthToSpace : public ngraph::op::util::FusedOp
+            class NGRAPH_API DepthToSpace : public Op
            {
            public:
                NGRAPH_RTTI_DECLARATION;
@@ -68,10 +69,11 @@ namespace ngraph
 
                std::size_t get_block_size() const { return m_blocksize; }
                DepthToSpaceMode get_mode() const { return m_mode; }
-                virtual OutputVector decompose_op() const override;
-
                virtual std::shared_ptr<Node>
                    clone_with_new_inputs(const OutputVector& new_args) const override;
+                void validate_and_infer_types() override;
+                bool evaluate(const HostTensorVector& outputs,
+                              const HostTensorVector& inputs) const override;
 
            protected:
                std::size_t m_blocksize;
@@ -22,57 +22,6 @@ namespace ngraph
 {
     namespace op
     {
-        namespace v0
-        {
-            /// \brief Elementwise division operation.
-            class NGRAPH_DEPRECATED(
-                "This operation is deprecated and will be removed soon. "
-                "Use v1::Divide instead of it.") NGRAPH_API Divide
-                : public util::BinaryElementwiseArithmetic
-            {
-                NGRAPH_SUPPRESS_DEPRECATED_START
-            public:
-                static constexpr NodeTypeInfo type_info{"Divide", 0};
-                const NodeTypeInfo& get_type_info() const override { return type_info; }
-                /// \brief Constructs a division operation.
-                Divide()
-                    : util::BinaryElementwiseArithmetic(AutoBroadcastSpec::NONE)
-                {
-                }
-                /// \brief Constructs a division operation.
-                ///
-                /// \param arg0 Node that produces the first input tensor.
-                /// \param arg1 Node that produces the second input tensor.
-                /// \param pythondiv Use Python style rounding for integral type
-                /// \param auto_broadcast Auto broadcast specification
-                Divide(const Output<Node>& arg0,
-                       const Output<Node>& arg1,
-                       bool pythondiv,
-                       const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-
-                /// \brief Constructs a division operation.
-                ///
-                /// \param arg0 Node that produces the first input tensor.
-                /// \param arg1 Node that produces the second input tensor.
-                /// \param auto_broadcast Auto broadcast specification
-                Divide(const Output<Node>& arg0,
-                       const Output<Node>& arg1,
-                       const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-                bool visit_attributes(AttributeVisitor& visitor) override;
-                bool is_pythondiv() const { return m_pythondiv; }
-                void set_is_pythondiv(bool pythondiv) { m_pythondiv = pythondiv; }
-                virtual std::shared_ptr<Node>
-                    clone_with_new_inputs(const OutputVector& new_args) const override;
-
-                bool evaluate(const HostTensorVector& outputs,
-                              const HostTensorVector& inputs) const override;
-
-            protected:
-                bool m_pythondiv{true};
-                NGRAPH_SUPPRESS_DEPRECATED_END
-            };
-        } // namespace v0
-
         namespace v1
         {
             /// \brief Elementwise division operation.
@@ -121,13 +70,5 @@ namespace ngraph
                bool m_pythondiv{true};
            };
        } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::Divide;
-        NGRAPH_SUPPRESS_DEPRECATED_END
    } // namespace op
-
-    NGRAPH_DEPRECATED("This operator was deprecated and will be removed with v0 operation.")
-    NGRAPH_API
-    std::shared_ptr<Node> operator/(const Output<Node>& arg0, const Output<Node>& arg1);
 } // namespace ngraph
@@ -22,57 +22,6 @@ namespace ngraph
 {
     namespace op
    {
-        namespace v0
-        {
-            // clang-format off
-            /// \brief Elementwise is-equal operation.
-            ///
-            /// ## Inputs
-            ///
-            /// |        | Type                              | Description                                             |
-            /// | ------ | --------------------------------- | ------------------------------------------------------ |
-            /// | `arg0` | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$ | A tensor of any shape and element type.                 |
-            /// | `arg1` | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$ | A tensor of the same shape and element type as `arg0`. |
-            /// | `autob`| AutoBroadcastSpec                 | Auto broadcast specification.                           |
-            ///
-            /// ## Output
-            ///
-            /// | Type                               | Description                                                                                                                                |
-            /// | ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
-            /// | \f$\texttt{bool}[d_1,\dots,d_n]\f$ | The tensor \f$T\f$, where \f$T[i_1,\dots,i_n] = 1\text{ if }\texttt{arg0}[i_1,\dots,i_n] = \texttt{arg1}[i_1,\dots,i_n]\text{, else } 0\f$ |
-            // clang-format on
-            class NGRAPH_DEPRECATED(
-                "This operation is deprecated and will be removed soon. "
-                "Use v1::Equal instead of it.") NGRAPH_API Equal
-                : public util::BinaryElementwiseComparison
-            {
-                NGRAPH_SUPPRESS_DEPRECATED_START
-            public:
-                static constexpr NodeTypeInfo type_info{"Equal", 0};
-                const NodeTypeInfo& get_type_info() const override { return type_info; }
-                /// \brief Constructs an equal operation.
-                Equal()
-                    : util::BinaryElementwiseComparison(AutoBroadcastSpec::NONE)
-                {
-                }
-                /// \brief Constructs an equal operation.
-                ///
-                /// \param arg0 Node that produces the first input tensor.
-                /// \param arg1 Node that produces the second input tensor.
-                /// \param auto_broadcast Auto broadcast specification
-                Equal(const Output<Node>& arg0,
-                      const Output<Node>& arg1,
-                      const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-
-                virtual std::shared_ptr<Node>
-                    clone_with_new_inputs(const OutputVector& new_args) const override;
-
-                bool evaluate(const HostTensorVector& outputs,
-                              const HostTensorVector& inputs) const override;
-                NGRAPH_SUPPRESS_DEPRECATED_END
-            };
-        } // namespace v0
-
         namespace v1
         {
            // clang-format off
@@ -118,9 +67,5 @@ namespace ngraph
                              const HostTensorVector& inputs) const override;
            };
        } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::Equal;
-        NGRAPH_SUPPRESS_DEPRECATED_END
    }
 }
@@ -22,40 +22,6 @@ namespace ngraph
 {
     namespace op
    {
-        namespace v0
-        {
-            /// \brief Elementwise greater-than operation.
-            class NGRAPH_DEPRECATED(
-                "This operation is deprecated and will be removed soon. "
-                "Use v1::Greater instead of it.") NGRAPH_API Greater
-                : public util::BinaryElementwiseComparison
-            {
-                NGRAPH_SUPPRESS_DEPRECATED_START
-            public:
-                static constexpr NodeTypeInfo type_info{"Greater", 0};
-                const NodeTypeInfo& get_type_info() const override { return type_info; }
-                /// \brief Constructs a greater-than operation.
-                Greater()
-                    : util::BinaryElementwiseComparison(AutoBroadcastSpec::NONE)
-                {
-                }
-                /// \brief Constructs a greater-than operation.
-                ///
-                /// \param arg0 Node that produces the first input tensor.
-                /// \param arg1 Node that produces the second input tensor.
-                /// \param auto_broadcast Auto broadcast specification
-                Greater(const Output<Node>& arg0,
-                        const Output<Node>& arg1,
-                        const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-
-                virtual std::shared_ptr<Node>
-                    clone_with_new_inputs(const OutputVector& new_args) const override;
-                bool evaluate(const HostTensorVector& outputs,
-                              const HostTensorVector& inputs) const override;
-                NGRAPH_SUPPRESS_DEPRECATED_END
-            };
-        } // namespace v0
-
         namespace v1
         {
            /// \brief Elementwise greater-than operation.
@@ -84,9 +50,5 @@ namespace ngraph
                              const HostTensorVector& inputs) const override;
            };
        } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::Greater;
-        NGRAPH_SUPPRESS_DEPRECATED_END
    }
 }
@@ -22,40 +22,6 @@ namespace ngraph
 {
     namespace op
    {
-        namespace v0
-        {
-            /// \brief Elementwise greater-than-or-equal operation.
-            class NGRAPH_DEPRECATED(
-                "This operation is deprecated and will be removed soon. "
-                "Use v1::GreaterEqual instead of it.") NGRAPH_API GreaterEq
-                : public util::BinaryElementwiseComparison
-            {
-                NGRAPH_SUPPRESS_DEPRECATED_START
-            public:
-                static constexpr NodeTypeInfo type_info{"GreaterEq", 0};
-                const NodeTypeInfo& get_type_info() const override { return type_info; }
-                /// \brief Constructs a greater-than-or-equal operation.
-                GreaterEq()
-                    : util::BinaryElementwiseComparison(AutoBroadcastSpec::NONE)
-                {
-                }
-                /// \brief Constructs a greater-than-or-equal operation.
-                ///
-                /// \param arg0 Node that produces the first input tensor.
-                /// \param arg1 Node that produces the second input tensor.
-                /// \param auto_broadcast Auto broadcast specification
-                GreaterEq(const Output<Node>& arg0,
-                          const Output<Node>& arg1,
-                          const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-
-                virtual std::shared_ptr<Node>
-                    clone_with_new_inputs(const OutputVector& new_args) const override;
-                bool evaluate(const HostTensorVector& outputs,
-                              const HostTensorVector& inputs) const override;
-                NGRAPH_SUPPRESS_DEPRECATED_END
-            };
-        } // namespace v0
-
         namespace v1
         {
            /// \brief Elementwise greater-than-or-equal operation.
@@ -84,9 +50,5 @@ namespace ngraph
                              const HostTensorVector& inputs) const override;
            };
        } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::GreaterEq;
-        NGRAPH_SUPPRESS_DEPRECATED_END
    }
 }
@@ -22,40 +22,6 @@ namespace ngraph
 {
     namespace op
    {
-        namespace v0
-        {
-            /// \brief Elementwise less-than operation.
-            class NGRAPH_DEPRECATED(
-                "This operation is deprecated and will be removed soon. "
-                "Use v1::Less instead of it.") NGRAPH_API Less
-                : public util::BinaryElementwiseComparison
-            {
-                NGRAPH_SUPPRESS_DEPRECATED_START
-            public:
-                static constexpr NodeTypeInfo type_info{"Less", 0};
-                const NodeTypeInfo& get_type_info() const override { return type_info; }
-                /// \brief Constructs a less-than operation.
-                Less()
-                    : util::BinaryElementwiseComparison(AutoBroadcastSpec::NONE)
-                {
-                }
-                /// \brief Constructs a less-than operation.
-                ///
-                /// \param arg0 Node that produces the first input tensor.
-                /// \param arg1 Node that produces the second input tensor.
-                /// \param auto_broadcast Auto broadcast specification
-                Less(const Output<Node>& arg0,
-                     const Output<Node>& arg1,
-                     const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-
-                virtual std::shared_ptr<Node>
-                    clone_with_new_inputs(const OutputVector& new_args) const override;
-                bool evaluate(const HostTensorVector& outputs,
-                              const HostTensorVector& inputs) const override;
-                NGRAPH_SUPPRESS_DEPRECATED_END
-            };
-        } // namespace v0
-
         namespace v1
         {
            /// \brief Elementwise less-than operation.
@@ -84,9 +50,5 @@ namespace ngraph
                              const HostTensorVector& inputs) const override;
            };
        } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::Less;
-        NGRAPH_SUPPRESS_DEPRECATED_END
    }
 }
|
@ -51,43 +51,5 @@ namespace ngraph
|
||||
const HostTensorVector& inputs) const override;
|
||||
};
|
||||
} // namespace v1
|
||||
|
||||
namespace v0
|
||||
{
|
||||
/// \brief Elementwise less-than-or-equal operation.
|
||||
class NGRAPH_DEPRECATED(
|
||||
"This operation is deprecated and will be removed soon. "
|
||||
"Use v1::LessEqual instead of it.") NGRAPH_API LessEq
|
||||
: public util::BinaryElementwiseComparison
|
||||
{
|
||||
NGRAPH_SUPPRESS_DEPRECATED_START
|
||||
public:
|
||||
static constexpr NodeTypeInfo type_info{"LessEq", 0};
|
||||
const NodeTypeInfo& get_type_info() const override { return type_info; }
|
||||
/// \brief Constructs a less-than-or-equal operation.
|
||||
LessEq()
|
||||
: util::BinaryElementwiseComparison(AutoBroadcastSpec::NONE)
|
||||
{
|
||||
}
|
||||
/// \brief Constructs a less-than-or-equal operation.
|
||||
///
|
||||
/// \param arg0 Node that produces the first input tensor.
|
||||
/// \param arg1 Node that produces the second input tensor.
|
||||
/// \param auto_broadcast Auto broadcast specification
|
||||
LessEq(const Output<Node>& arg0,
|
||||
const Output<Node>& arg1,
|
||||
const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
|
||||
|
||||
virtual std::shared_ptr<Node>
|
||||
clone_with_new_inputs(const OutputVector& new_args) const override;
|
||||
bool evaluate(const HostTensorVector& outputs,
|
||||
const HostTensorVector& inputs) const override;
|
||||
NGRAPH_SUPPRESS_DEPRECATED_END
|
||||
};
|
||||
} // namespace v0
|
||||
|
||||
NGRAPH_SUPPRESS_DEPRECATED_START
|
||||
using v0::LessEq;
|
||||
NGRAPH_SUPPRESS_DEPRECATED_END
|
||||
} // namespace op
|
||||
} // namespace op
|
||||
} // namespace ngraph
|
||||
|
@@ -401,7 +401,7 @@ namespace ngraph
 
            static constexpr std::size_t s_gates_count{4};
        };
-    } // v1
+    } // v4
    } // namespace op
 
    NGRAPH_API
@@ -22,41 +22,6 @@ namespace ngraph
 {
     namespace op
    {
-        namespace v0
-        {
-            /// \brief Elementwise maximum operation.
-            class NGRAPH_DEPRECATED(
-                "This operation is deprecated and will be removed soon. "
-                "Use v1::Maximum instead of it.") NGRAPH_API Maximum
-                : public util::BinaryElementwiseArithmetic
-            {
-                NGRAPH_SUPPRESS_DEPRECATED_START
-            public:
-                static constexpr NodeTypeInfo type_info{"Maximum", 0};
-                const NodeTypeInfo& get_type_info() const override { return type_info; }
-                /// \brief Constructs a maximum operation.
-                Maximum()
-                    : util::BinaryElementwiseArithmetic(AutoBroadcastSpec::NONE)
-                {
-                }
-                /// \brief Constructs a maximum operation.
-                ///
-                /// \param arg0 Node that produces the first input tensor.
-                /// \param arg1 Node that produces the second input tensor.
-                /// \param auto_broadcast Auto broadcast specification
-                Maximum(const Output<Node>& arg0,
-                        const Output<Node>& arg1,
-                        const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-
-                virtual std::shared_ptr<Node>
-                    clone_with_new_inputs(const OutputVector& new_args) const override;
-
-                bool evaluate(const HostTensorVector& outputs,
-                              const HostTensorVector& inputs) const override;
-                NGRAPH_SUPPRESS_DEPRECATED_END
-            };
-        } // namespace v0
-
         namespace v1
         {
            /// \brief Elementwise maximum operation.
@@ -88,9 +53,5 @@ namespace ngraph
                              const HostTensorVector& inputs) const override;
            };
        } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::Maximum;
-        NGRAPH_SUPPRESS_DEPRECATED_END
    }
 }
@@ -22,41 +22,6 @@ namespace ngraph
 {
     namespace op
    {
-        namespace v0
-        {
-            /// \brief Elementwise minimum operation.
-            class NGRAPH_DEPRECATED(
-                "This operation is deprecated and will be removed soon. "
-                "Use v1::Minimum instead of it.") NGRAPH_API Minimum
-                : public util::BinaryElementwiseArithmetic
-            {
-                NGRAPH_SUPPRESS_DEPRECATED_START
-            public:
-                static constexpr NodeTypeInfo type_info{"Minimum", 0};
-                const NodeTypeInfo& get_type_info() const override { return type_info; }
-                /// \brief Constructs a minimum operation.
-                Minimum()
-                    : util::BinaryElementwiseArithmetic(AutoBroadcastSpec::NONE)
-                {
-                }
-                /// \brief Constructs a minimum operation.
-                ///
-                /// \param arg0 Node that produces the first input tensor.
-                /// \param arg1 Node that produces the second input tensor.
-                /// \param auto_broadcast Auto broadcast specification
-                Minimum(const Output<Node>& arg0,
-                        const Output<Node>& arg1,
-                        const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-
-                virtual std::shared_ptr<Node>
-                    clone_with_new_inputs(const OutputVector& new_args) const override;
-
-                bool evaluate(const HostTensorVector& outputs,
-                              const HostTensorVector& inputs) const override;
-                NGRAPH_SUPPRESS_DEPRECATED_END
-            };
-        } // namespace v0
-
         namespace v1
         {
            /// \brief Elementwise minimum operation.
@@ -88,9 +53,5 @@ namespace ngraph
                              const HostTensorVector& inputs) const override;
            };
        } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::Minimum;
-        NGRAPH_SUPPRESS_DEPRECATED_END
    }
 }
@@ -88,13 +88,5 @@ namespace ngraph
                              const HostTensorVector& inputs) const override;
            };
        } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::Multiply;
-        NGRAPH_SUPPRESS_DEPRECATED_END
    } // namespace op
-
-    NGRAPH_DEPRECATED("This operator was deprecated and will be removed with v0 operation.")
-    NGRAPH_API
-    std::shared_ptr<Node> operator*(const Output<Node>& arg0, const Output<Node>& arg1);
 } // namespace ngraph
@@ -22,41 +22,6 @@ namespace ngraph
 {
     namespace op
    {
-        namespace v0
-        {
-            /// \brief Elementwise not-equal operation.
-            class NGRAPH_DEPRECATED(
-                "This operation is deprecated and will be removed soon. "
-                "Use v1::NotEqual instead of it.") NGRAPH_API NotEqual
-                : public util::BinaryElementwiseComparison
-            {
-                NGRAPH_SUPPRESS_DEPRECATED_START
-            public:
-                static constexpr NodeTypeInfo type_info{"NotEqual", 0};
-                const NodeTypeInfo& get_type_info() const override { return type_info; }
-                /// \brief Constructs a not-equal operation.
-                NotEqual()
-                    : util::BinaryElementwiseComparison(AutoBroadcastSpec::NONE)
-                {
-                }
-                /// \brief Constructs a not-equal operation.
-                ///
-                /// \param arg0 Node that produces the first input tensor.
-                /// \param arg1 Node that produces the second input tensor.
-                /// \param auto_broadcast Auto broadcast specification
-                NotEqual(const Output<Node>& arg0,
-                         const Output<Node>& arg1,
-                         const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-
-                virtual std::shared_ptr<Node>
-                    clone_with_new_inputs(const OutputVector& new_args) const override;
-
-                bool evaluate(const HostTensorVector& outputs,
-                              const HostTensorVector& inputs) const override;
-                NGRAPH_SUPPRESS_DEPRECATED_END
-            };
-        } // namespace v0
-
         namespace v1
         {
            /// \brief Elementwise not-equal operation.
@@ -86,9 +51,5 @@ namespace ngraph
                              const HostTensorVector& inputs) const override;
            };
        } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::NotEqual;
-        NGRAPH_SUPPRESS_DEPRECATED_END
    }
 }
@@ -31,7 +31,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
 NGRAPH_OP(Abs, ngraph::op::v0, 0)
 NGRAPH_OP(Acos, ngraph::op::v0, 0)
 NGRAPH_OP(Acosh, ngraph::op::v3, 3)
-NGRAPH_OP(Add, ngraph::op::v0, 0)
 NGRAPH_OP(Add, ngraph::op::v1, 1)
 NGRAPH_OP(Asin, ngraph::op::v0, 0)
 NGRAPH_OP(Asinh, ngraph::op::v3, 3)
@@ -60,13 +59,11 @@ NGRAPH_OP(DeformableConvolution, ngraph::op::v1, 1)
 NGRAPH_OP(DeformablePSROIPooling, ngraph::op::v1, 1)
 NGRAPH_OP(DepthToSpace, ngraph::op::v0, 0)
 NGRAPH_OP(DetectionOutput, ngraph::op::v0, 0)
-NGRAPH_OP(Divide, ngraph::op::v0, 0)
 NGRAPH_OP(Divide, ngraph::op::v1, 1)
 NGRAPH_OP(Elu, ngraph::op::v0, 0)
 NGRAPH_OP(EmbeddingBagOffsetsSum, ngraph::op::v3, 3)
 NGRAPH_OP(EmbeddingBagPackedSum, ngraph::op::v3, 3)
 NGRAPH_OP(EmbeddingSegmentsSum, ngraph::op::v3, 3)
-NGRAPH_OP(Equal, ngraph::op::v0, 0)
 NGRAPH_OP(Equal, ngraph::op::v1, 1)
 NGRAPH_OP(Erf, ngraph::op::v0, 0)
 NGRAPH_OP(Exp, ngraph::op::v0, 0)
@@ -80,9 +77,7 @@ NGRAPH_OP(Gather, ngraph::op::v1, 1)
 NGRAPH_OP(GatherND, ngraph::op::v5, 5)
 NGRAPH_OP(GatherTree, ngraph::op::v1, 1)
 NGRAPH_OP(Gelu, ngraph::op::v0, 0)
-NGRAPH_OP(Greater, ngraph::op::v0, 0)
 NGRAPH_OP(Greater, ngraph::op::v1, 1)
-NGRAPH_OP(GreaterEq, ngraph::op::v0, 0)
 NGRAPH_OP(GreaterEqual, ngraph::op::v1, 1)
 NGRAPH_OP(GroupConvolution, ngraph::op::v1, 1)
 NGRAPH_OP(GroupConvolutionBackpropData, ngraph::op::v1, 1)
@@ -92,9 +87,7 @@ NGRAPH_OP(Interpolate, ngraph::op::v4, 4)
 NGRAPH_OP(LRN, ngraph::op::v0, 0)
 NGRAPH_OP(LSTMCell, ngraph::op::v0, 0)
 NGRAPH_OP(LSTMSequence, ngraph::op::v0, 0)
-NGRAPH_OP(Less, ngraph::op::v0, 0)
 NGRAPH_OP(Less, ngraph::op::v1, 1)
-NGRAPH_OP(LessEq, ngraph::op::v0, 0)
 NGRAPH_OP(LessEqual, ngraph::op::v1, 1)
 NGRAPH_OP(Log, ngraph::op::v0, 0)
 NGRAPH_OP(LogicalAnd, ngraph::op::v1, 1)
@@ -104,26 +97,21 @@ NGRAPH_OP(LogicalXor, ngraph::op::v1, 1)
 NGRAPH_OP(MVN, ngraph::op::v0, 0)
 NGRAPH_OP(MatMul, ngraph::op::v0, 0)
 NGRAPH_OP(MaxPool, ngraph::op::v1, 1)
-NGRAPH_OP(Maximum, ngraph::op::v0, 0)
 NGRAPH_OP(Maximum, ngraph::op::v1, 1)
-NGRAPH_OP(Minimum, ngraph::op::v0, 0)
 NGRAPH_OP(Minimum, ngraph::op::v1, 1)
 NGRAPH_OP(Mod, ngraph::op::v1, 1)
-NGRAPH_OP(Multiply, ngraph::op::v0, 0)
 NGRAPH_OP(Multiply, ngraph::op::v1, 1)
 NGRAPH_OP(Negative, ngraph::op::v0, 0)
 NGRAPH_OP(NonMaxSuppression, ngraph::op::v1, 1)
 NGRAPH_OP(NonMaxSuppression, ngraph::op::v3, 3)
 NGRAPH_OP(NonZero, ngraph::op::v3, 3)
 NGRAPH_OP(NormalizeL2, ngraph::op::v0, 0)
-NGRAPH_OP(NotEqual, ngraph::op::v0, 0)
 NGRAPH_OP(NotEqual, ngraph::op::v1, 1)
 NGRAPH_OP(OneHot, ngraph::op::v1, 1)
 NGRAPH_OP(PRelu, ngraph::op::v0, 0)
 NGRAPH_OP(PSROIPooling, ngraph::op::v0, 0)
 NGRAPH_OP(Pad, ngraph::op::v1, 1)
 NGRAPH_OP(Parameter, ngraph::op::v0, 0)
-NGRAPH_OP(Power, ngraph::op::v0, 0)
 NGRAPH_OP(Power, ngraph::op::v1, 1)
 NGRAPH_OP(PriorBox, ngraph::op::v0, 0)
 NGRAPH_OP(PriorBoxClustered, ngraph::op::v0, 0)
@@ -150,7 +138,6 @@ NGRAPH_OP(Round, ngraph::op::v5, 5)
 NGRAPH_OP(ROIAlign, ngraph::op::v3, 3)
 NGRAPH_OP(ScatterElementsUpdate, ngraph::op::v3, 3)
 NGRAPH_OP(ScatterUpdate, ngraph::op::v3, 3)
-NGRAPH_OP(Select, ngraph::op::v0, 0)
 NGRAPH_OP(Select, ngraph::op::v1, 1)
 NGRAPH_OP(Selu, ngraph::op::v0, 0)
 NGRAPH_OP(ShapeOf, ngraph::op::v0, 0)
@@ -168,7 +155,6 @@ NGRAPH_OP(Sqrt, ngraph::op::v0, 0)
 NGRAPH_OP(SquaredDifference, ngraph::op::v0, 0)
 NGRAPH_OP(Squeeze, ngraph::op::v0, 0)
 NGRAPH_OP(StridedSlice, ngraph::op::v1, 1)
-NGRAPH_OP(Subtract, ngraph::op::v0, 0)
 NGRAPH_OP(Subtract, ngraph::op::v1, 1)
 NGRAPH_OP(Tan, ngraph::op::v0, 0)
 NGRAPH_OP(Tanh, ngraph::op::v0, 0)
@@ -22,54 +22,6 @@ namespace ngraph
 {
     namespace op
    {
-        namespace v0
-        {
-            // clang-format off
-            /// \brief Elementwise exponentiation operation.
-            ///
-            /// ## Inputs
-            ///
-            /// |        | Type                              | Description                                             |
-            /// | ------ | --------------------------------- | ------------------------------------------------------ |
-            /// | `arg0` | \f$N[d_1,\dots,d_n]~(n \geq 0)\f$ | A tensor of any shape and numeric element type.         |
-            /// | `arg1` | \f$N[d_1,\dots,d_n]~(n \geq 0)\f$ | A tensor of the same shape and element type as `arg0`. |
-            ///
-            /// ## Output
-            ///
-            /// | Type                   | Description                                                                                                      |
-            /// | ---------------------- | ---------------------------------------------------------------------------------------------------------------- |
-            /// | \f$N[d_1,\dots,d_n]\f$ | The tensor \f$T\f$, where \f$T[i_1,\dots,i_n] = \texttt{arg0}[i_1,\dots,i_n]^{\texttt{arg1}[i_1,\dots,i_n]}\f$ |
-            // clang-format on
-            class NGRAPH_DEPRECATED(
-                "This operation is deprecated and will be removed soon. "
-                "Use v1::Power instead of it.") NGRAPH_API Power
-                : public util::BinaryElementwiseArithmetic
-            {
-                NGRAPH_SUPPRESS_DEPRECATED_START
-            public:
-                static constexpr NodeTypeInfo type_info{"Power", 0};
-                const NodeTypeInfo& get_type_info() const override { return type_info; }
-                Power()
-                    : util::BinaryElementwiseArithmetic(AutoBroadcastSpec::NONE)
-                {
-                }
-                /// \brief Constructs an exponentiation operation.
-                ///
-                /// \param arg0 Node that produces the first input tensor.
-                /// \param arg1 Node that produces the second input tensor.
-                /// \param auto_broadcast Auto broadcast specification
-                Power(const Output<Node>& arg0,
-                      const Output<Node>& arg1,
-                      const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());
-
-                virtual std::shared_ptr<Node>
-                    clone_with_new_inputs(const OutputVector& new_args) const override;
-                bool evaluate(const HostTensorVector& outputs,
-                              const HostTensorVector& inputs) const override;
-                NGRAPH_SUPPRESS_DEPRECATED_END
-            };
-        } // namespace v0
-
         namespace v1
         {
            // clang-format off
@@ -114,9 +66,5 @@ namespace ngraph
                              const HostTensorVector& inputs) const override;
            };
        } // namespace v1
-
-        NGRAPH_SUPPRESS_DEPRECATED_START
-        using v0::Power;
-        NGRAPH_SUPPRESS_DEPRECATED_END
    }
 }
@@ -22,51 +22,6 @@ namespace ngraph
{
    namespace op
    {
        namespace v0
        {
            // clang-format off
            /// \brief Elementwise selection operation.
            ///
            /// ## Inputs
            ///
            /// |        | Type                                          | Description                                                  |
            /// | ------ | --------------------------------------------- | ------------------------------------------------------------ |
            /// | `arg0` | \f$\texttt{bool}[d_1,\dots,d_n]~(n \geq 0)\f$ | A tensor of any shape, with element `bool`.                  |
            /// | `arg1` | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$             | A tensor of the same shape as `arg0`, with any element type. |
            /// | `arg2` | \f$E[d_1,\dots,d_n]~(n \geq 0)\f$             | A tensor of the same shape and element type as `arg1`.       |
            ///
            /// ## Output
            ///
            /// | Type                   | Description                                                                                                                                            |
            /// | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
            /// | \f$E[d_1,\dots,d_n]\f$ | The tensor \f$T\f$, where \f$T[i_1,\dots,i_n] = \texttt{arg1}[i_1,\dots,i_n]\text{ if }\texttt{arg0}[i_1,\dots,i_n] \neq 0\text{, else }\texttt{arg2}[i_1,\dots,i_n]\f$ |
            // clang-format on
            class NGRAPH_DEPRECATED(
                "This operation is deprecated and will be removed soon. "
                "Use v1::Select instead of it.") NGRAPH_API Select : public Op
            {
                NGRAPH_SUPPRESS_DEPRECATED_START
            public:
                static constexpr NodeTypeInfo type_info{"Select", 0};
                const NodeTypeInfo& get_type_info() const override { return type_info; }
                /// \brief Constructs a selection operation.
                Select() = default;
                /// \brief Constructs a selection operation.
                ///
                /// \param arg0 Node that produces the first input tensor.
                /// \param arg1 Node that produces the second input tensor.
                /// \param arg2 Node that produces the third input tensor.
                Select(const Output<Node>& arg0,
                       const Output<Node>& arg1,
                       const Output<Node>& arg2);

                virtual std::shared_ptr<Node>
                    clone_with_new_inputs(const OutputVector& new_args) const override;
                void validate_and_infer_types() override;
                NGRAPH_SUPPRESS_DEPRECATED_END
            };
        } // namespace v0

        namespace v1
        {
            // clang-format off
@@ -129,8 +84,5 @@ namespace ngraph
            AutoBroadcastSpec m_auto_broadcast;
        };
    } // namespace v1
    NGRAPH_SUPPRESS_DEPRECATED_START
    using v0::Select;
    NGRAPH_SUPPRESS_DEPRECATED_END
    } // namespace op
} // namespace ngraph
@@ -30,7 +30,7 @@ namespace ngraph
    namespace v0
    {
        /// \brief Permutes data in the channel dimension of the input
-       class NGRAPH_API ShuffleChannels : public ngraph::op::util::FusedOp
+       class NGRAPH_API ShuffleChannels : public Op
        {
        public:
            static constexpr NodeTypeInfo type_info{"ShuffleChannels", 0};
@@ -53,15 +53,16 @@ namespace ngraph
            bool visit_attributes(AttributeVisitor& visitor) override;
            size_t get_zero_based_axis() const;

-           virtual void pre_validate_and_infer_types() override;
-
-           virtual OutputVector decompose_op() const override;
+           virtual void validate_and_infer_types() override;

            virtual std::shared_ptr<Node>
                clone_with_new_inputs(const OutputVector& new_args) const override;

            int64_t get_axis() const { return m_axis; }
            int64_t get_group() const { return m_group; }
+           bool evaluate(const HostTensorVector& outputs,
+                         const HostTensorVector& inputs) const override;

        private:
            /// \brief Generates a shape required to permute the data
            ///
@@ -60,6 +60,9 @@ namespace ngraph
            std::shared_ptr<Node>
                clone_with_new_inputs(const OutputVector& new_args) const override;
            bool visit_attributes(AttributeVisitor& visitor) override;

+           bool evaluate(const HostTensorVector& outputs,
+                         const HostTensorVector& inputs) const override;
        };
    }
    using v1::SpaceToBatch;
@@ -18,6 +18,7 @@

#include "ngraph/node.hpp"
#include "ngraph/op/util/fused_op.hpp"
+#include "ngraph/runtime/host_tensor.hpp"

NGRAPH_SUPPRESS_DEPRECATED_START

@@ -34,7 +35,7 @@ namespace ngraph
    ///
    /// Output node produces a tensor with shape:
    /// [N, C * blocksize * blocksize, H / blocksize, W / blocksize]
-   class NGRAPH_API SpaceToDepth : public ngraph::op::util::FusedOp
+   class NGRAPH_API SpaceToDepth : public Op
    {
    public:
        static constexpr NodeTypeInfo type_info{"SpaceToDepth", 0};
@@ -65,11 +66,13 @@ namespace ngraph
        bool visit_attributes(AttributeVisitor& visitor) override;
        std::size_t get_block_size() const { return m_blocksize; }
        SpaceToDepthMode get_mode() const { return m_mode; }
-       virtual OutputVector decompose_op() const override;
+       void validate_and_infer_types() override;
        virtual std::shared_ptr<Node>
            clone_with_new_inputs(const OutputVector& new_args) const override;

+       bool evaluate(const HostTensorVector& outputs,
+                     const HostTensorVector& inputs) const override;

    protected:
        std::size_t m_blocksize;
        SpaceToDepthMode m_mode;
@@ -22,42 +22,6 @@ namespace ngraph
{
    namespace op
    {
        namespace v0
        {
            /// \brief Elementwise subtraction operation.
            class NGRAPH_DEPRECATED(
                "This operation is deprecated and will be removed soon. "
                "Use v1::Subtract instead of it.") NGRAPH_API Subtract
                : public util::BinaryElementwiseArithmetic
            {
                NGRAPH_SUPPRESS_DEPRECATED_START
            public:
                static constexpr NodeTypeInfo type_info{"Subtract", 0};
                const NodeTypeInfo& get_type_info() const override { return type_info; }
                Subtract()
                    : util::BinaryElementwiseArithmetic(AutoBroadcastSpec::NONE)
                {
                }

                /// \brief Constructs a subtraction operation.
                ///
                /// \param arg0 Node that produces the first input tensor.
                /// \param arg1 Node that produces the second input tensor.
                /// \param auto_broadcast Auto broadcast specification
                Subtract(const Output<Node>& arg0,
                         const Output<Node>& arg1,
                         const AutoBroadcastSpec& auto_broadcast = AutoBroadcastSpec());

                virtual std::shared_ptr<Node>
                    clone_with_new_inputs(const OutputVector& new_args) const override;

                bool evaluate(const HostTensorVector& outputs,
                              const HostTensorVector& inputs) const override;
                NGRAPH_SUPPRESS_DEPRECATED_END
            };

        } // namespace v0

        namespace v1
        {
            /// \brief Elementwise subtraction operation.
@@ -87,14 +51,5 @@ namespace ngraph
                              const HostTensorVector& inputs) const override;
            };
        } // namespace v1

        NGRAPH_SUPPRESS_DEPRECATED_START
        using v0::Subtract;
        NGRAPH_SUPPRESS_DEPRECATED_END
    } // namespace op

    NGRAPH_DEPRECATED("This operator was deprecated and will be removed with v0 operation.")
    NGRAPH_API
    std::shared_ptr<ngraph::Node> operator-(const Output<ngraph::Node> arg0,
                                            const Output<ngraph::Node> arg1);
} // namespace ngraph
@@ -388,21 +388,25 @@ namespace ngraph
    Shape arg1_padded_shape = arg1_shape;
    Shape arg2_padded_shape = arg2_shape;

-   while (arg1_padded_shape.size() < arg2_padded_shape.size())
+   size_t max_shape_size = std::max({arg0_padded_shape.size(),
+                                     arg1_padded_shape.size(),
+                                     arg2_padded_shape.size()});
+
+   while (arg0_padded_shape.size() < max_shape_size)
+   {
+       arg0_padded_shape.insert(arg0_padded_shape.begin(), 1);
+   }
+
+   while (arg1_padded_shape.size() < max_shape_size)
    {
        arg1_padded_shape.insert(arg1_padded_shape.begin(), 1);
    }

-   while (arg2_padded_shape.size() < arg1_padded_shape.size())
+   while (arg2_padded_shape.size() < max_shape_size)
    {
        arg2_padded_shape.insert(arg2_padded_shape.begin(), 1);
    }

-   while (arg0_padded_shape.size() < arg1_padded_shape.size())
-   {
-       arg0_padded_shape.insert(arg0_padded_shape.begin(), 1);
-   }
-
    Shape arg0_squeezed_shape;
    Shape arg1_squeezed_shape;
    Shape arg2_squeezed_shape;
@@ -411,7 +415,7 @@ namespace ngraph
    AxisSet arg2_squeezed_axes;
    Shape output_shape;

-   for (size_t i = 0; i < arg1_padded_shape.size(); i++)
+   for (size_t i = 0; i < max_shape_size; i++)
    {
        if (arg1_padded_shape[i] == 1)
        {
@@ -440,9 +444,9 @@ namespace ngraph
            arg0_squeezed_shape.push_back(arg0_padded_shape[i]);
        }

-       output_shape.push_back(arg1_padded_shape[i] == 1
-                                  ? arg2_padded_shape[i]
-                                  : arg1_padded_shape[i]);
+       output_shape.push_back(std::max({arg0_padded_shape[i],
+                                        arg2_padded_shape[i],
+                                        arg1_padded_shape[i]}));
    }

    CoordinateTransform arg0_transform(arg0_squeezed_shape);
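For orientation, the padding rewrite above is plain numpy-style rank alignment: every operand shape is left-padded with 1s up to the largest rank before the per-dimension sizes are compared. A minimal standalone sketch of that step (a hypothetical helper, not part of the patch):

    // Sketch: left-pad each shape with 1s up to the largest rank, numpy-style.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    using Shape = std::vector<std::size_t>;

    void pad_to_max_rank(Shape& a, Shape& b, Shape& c)
    {
        const std::size_t max_rank = std::max({a.size(), b.size(), c.size()});
        for (Shape* s : {&a, &b, &c})
        {
            s->insert(s->begin(), max_rank - s->size(), 1); // prepend 1s
        }
    }
    // After padding, output_dim[i] = max(a[i], b[i], c[i]), as in the hunk above.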
@@ -223,8 +223,8 @@ namespace ngraph

    if (in_bounds || include_padding_in_avg_computation)
    {
-       T v =
-           in_bounds ? arg[input_batch_transform.index(input_batch_coord)] : 0;
+       T v = in_bounds ? arg[input_batch_transform.index(input_batch_coord)]
+                       : static_cast<T>(0);
        result += v;
        n_elements++;
    }
@@ -19,10 +19,13 @@
#include <cfenv>
#include <cmath>
#include <functional>
#include <numeric>

#include "ngraph/axis_vector.hpp"
#include "ngraph/coordinate_transform.hpp"
+#include "ngraph/runtime/reference/concat.hpp"
+#include "ngraph/runtime/reference/reverse.hpp"
+#include "ngraph/runtime/reference/split.hpp"
#include "ngraph/util.hpp"

namespace ngraph
@@ -72,21 +75,8 @@ namespace ngraph
                             size_t filter_out_channel_axis,
                             size_t filter_in_channel_axis,
                             size_t out_batch_axis,
-                            size_t out_channel_axis,
-                            const float* input_scale = nullptr,
-                            const INPUT* input_zero_point = nullptr,
-                            const float* filter_scale = nullptr,
-                            const FILTER* filter_zero_point = nullptr,
-                            const float* output_scale = nullptr,
-                            const OUTPUT* output_zero_point = nullptr)
+                            size_t out_channel_axis)
    {
-       bool is_quantized = false;
-       if (input_scale && input_zero_point && filter_scale && filter_zero_point &&
-           output_scale && output_zero_point)
-       {
-           is_quantized = true;
-       }
-
        auto old_mode = std::fegetround();
        std::fesetround(FE_TONEAREST);
        // Comments throughout assume without loss of generality that:
@@ -236,11 +226,7 @@ namespace ngraph
        {
            ACCUMULATION in_v = static_cast<ACCUMULATION>(in[in_idx]);
            ACCUMULATION f_v = static_cast<ACCUMULATION>(filter[filter_idx]);
-           if (is_quantized)
-           {
-               in_v = in_v - static_cast<ACCUMULATION>(*input_zero_point);
-               f_v = f_v - static_cast<ACCUMULATION>(*filter_zero_point);
-           }

            result += in_v * f_v;
            in_idx += in_channel_stride;
            filter_idx += filter_in_channel_stride;
@@ -249,17 +235,8 @@ namespace ngraph
            ++in_it;
            ++filter_it;
        }
-       if (is_quantized)
-       {
-           float scale = *input_scale * *filter_scale / *output_scale;
-           out[out_transform.index(out_coord)] =
-               static_cast<OUTPUT>(std::round(static_cast<float>(result) * scale)) +
-               *output_zero_point;
-       }
-       else
-       {
-           out[out_transform.index(out_coord)] = result;
-       }
+       out[out_transform.index(out_coord)] = result;
    }
    std::fesetround(old_mode);
}
@@ -278,13 +255,7 @@ namespace ngraph
                     const Strides& filter_dilation,
                     const CoordinateDiff& in_pad_below,
                     const CoordinateDiff& in_pad_above,
-                    const Strides& in_dilation,
-                    const float* input_scale = nullptr,
-                    const INPUT* input_zero_point = nullptr,
-                    const float* filter_scale = nullptr,
-                    const FILTER* filter_zero_point = nullptr,
-                    const float* output_scale = nullptr,
-                    const OUTPUT* output_zero_point = nullptr)
+                    const Strides& in_dilation)

    {
        general_convolution<INPUT, FILTER, OUTPUT, ACCUMULATION>(in,
@@ -303,48 +274,7 @@ namespace ngraph
            0,
            1,
            0,
-           1,
-           input_scale,
-           input_zero_point,
-           filter_scale,
-           filter_zero_point,
-           output_scale,
-           output_zero_point);
-   }
-
-   template <typename INPUT,
-             typename OUTPUT,
-             typename FILTER,
-             typename ACCUMULATION = typename widen<FILTER>::type>
-   void convolution_backprop_filter(const INPUT* in,
-                                    const OUTPUT* delta_out,
-                                    FILTER* delta_filter,
-                                    const Shape& in_shape,
-                                    const Shape& out_shape,
-                                    const Shape& filter_shape,
-                                    const Strides& filter_dilation,
-                                    const Strides& stride,
-                                    const CoordinateDiff& in_pad_below,
-                                    const CoordinateDiff& backprop_in_pad_above,
-                                    const Strides& in_dilation)
-   {
-       general_convolution<INPUT, OUTPUT, FILTER, ACCUMULATION>(in,
-                                                                delta_out,
-                                                                delta_filter,
-                                                                in_shape,
-                                                                out_shape,
-                                                                filter_shape,
-                                                                filter_dilation,
-                                                                stride,
-                                                                in_pad_below,
-                                                                backprop_in_pad_above,
-                                                                in_dilation,
-                                                                1,
-                                                                0,
-                                                                1,
-                                                                0,
-                                                                1,
-                                                                0);
+           1);
    }

    template <typename OUTPUT,
@@ -359,15 +289,16 @@ namespace ngraph
                     const Shape& in_shape,
                     const Strides& in_dilation,
                     const Strides& filter_dilation,
-                    const CoordinateDiff& backward_delta_out_pad_below,
-                    const CoordinateDiff& backward_delta_out_pad_above,
+                    const CoordinateDiff& forward_in_pad_bellow,
+                    const CoordinateDiff& forward_in_pad_above,
                     const Strides& stride)
    {
        // Note that we only reverse the spatial dimensions here (loop
        // starts at 2)
        std::vector<INPUT> reversed(shape_size(filter_shape));
        AxisSet reverse_axes;
-       for (size_t i = 2; i < filter_shape.size(); ++i)
+       size_t reverse_axes_start = 2;
+       for (size_t i = reverse_axes_start; i < filter_shape.size(); ++i)
        {
            reverse_axes.insert(i);
        }
@@ -377,6 +308,35 @@ namespace ngraph
            filter_shape,
            reverse_axes,
            sizeof(FILTER));
+       size_t filter_out_channel_axis = 1;
+       size_t filter_in_channel_axis = 0;
+
+       // Compute backward pad out pad bellow
+       size_t spatial_dim_count = in_shape.size() - 2;
+
+       CoordinateDiff backward_delta_out_pad_below;
+       backward_delta_out_pad_below.resize(spatial_dim_count);
+
+       for (size_t i = 0; i < spatial_dim_count; i++)
+       {
+           backward_delta_out_pad_below[i] =
+               (static_cast<ptrdiff_t>(filter_shape[i + 2]) - 1) * filter_dilation[i] -
+               forward_in_pad_bellow[i];
+       }
+       // Compute backward pad out pad above
+       CoordinateDiff backward_delta_out_pad_above;
+       backward_delta_out_pad_above.resize(spatial_dim_count);
+
+       for (size_t i = 0; i < spatial_dim_count; i++)
+       {
+           backward_delta_out_pad_above[i] =
+               (static_cast<ptrdiff_t>(filter_shape[i + 2]) - 1) * filter_dilation[i] +
+               ((forward_in_pad_bellow[i] + ((in_shape[i + 2]) - 1) * in_dilation[i] +
+                 forward_in_pad_above[i] -
+                 (static_cast<ptrdiff_t>(filter_shape[i + 2]) - 1) * filter_dilation[i]) %
+                stride[i]) -
+               forward_in_pad_above[i];
+       }
+
        general_convolution<OUTPUT, FILTER, INPUT, ACCUMULATION>(
            delta_out,
@@ -392,8 +352,8 @@ namespace ngraph
            stride,
            0,
            1,
-           1,
-           0,
+           filter_out_channel_axis,
+           filter_in_channel_axis,
            0,
            1);
    }
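The two padding loops added above implement the standard transposed-convolution relations. Written out for spatial axis i (filter extent f_i, filter dilation d_i, input spatial size x_i, input dilation delta_i, stride s_i, forward pads p_i below / q_i above), the code computes:

    % Backward output padding implied by the loops above:
    \mathrm{pad}^{\mathrm{bwd}}_{\mathrm{below},i} = (f_i - 1)\,d_i - p_i
    \mathrm{pad}^{\mathrm{bwd}}_{\mathrm{above},i} = (f_i - 1)\,d_i
        + \bigl(p_i + (x_i - 1)\,\delta_i + q_i - (f_i - 1)\,d_i\bigr) \bmod s_i
        - q_i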
@@ -33,11 +33,11 @@ namespace ngraph
    private:
        struct NormalizedBBox
        {
-           dataType xmin = 0;
-           dataType ymin = 0;
-           dataType xmax = 0;
-           dataType ymax = 0;
-           dataType size = 0;
+           dataType xmin = dataType(0);
+           dataType ymin = dataType(0);
+           dataType xmax = dataType(0);
+           dataType ymax = dataType(0);
+           dataType size = dataType(0);
        };
        using LabelBBox = std::map<int, std::vector<NormalizedBBox>>;
@@ -2,6 +2,7 @@
// SPDX-License-Identifier: Apache-2.0
//

+#include <ngraph/ops.hpp>
#include "ngraph/shape_util.hpp"

namespace ngraph
@@ -10,12 +11,12 @@ namespace ngraph
{
    namespace reference
    {
-       template <typename T, typename U>
-       void extractImagePatches(const op::ExtractImagePatches* extImgPatches,
-                                const T* input,
-                                T* out,
-                                const Shape& inShape,
-                                const Shape& outShape)
+       template <typename T>
+       void extract_image_patches(const std::shared_ptr<op::ExtractImagePatches> extImgPatches,
+                                  const T* input,
+                                  T* out,
+                                  const Shape& inShape,
+                                  const Shape& outShape)
        {
            const size_t dimsSize = inShape.size();
            const size_t BATCH = 0, CHANNEL = 1, HIGHT = 0, WIDTH = 1;
@@ -0,0 +1,247 @@
//*****************************************************************************
// Copyright 2020 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************

#pragma once

#include <algorithm>  // std::all_of (assumed missing from the original hunk)
#include <cfenv>      // std::fegetround / std::fesetround (assumed missing)
#include <cmath>
#include <cstddef>
#include <functional> // std::multiplies (assumed missing)
#include <numeric>
#include <stdexcept>  // std::runtime_error (assumed missing)
#include <string>
#include <utility>
#include <vector>

#include "ngraph/check.hpp" // NGRAPH_CHECK (assumed missing)
#include "ngraph/shape.hpp"

namespace ngraph
{
    namespace runtime
    {
        namespace reference
        {
            namespace
            {
                std::vector<size_t>
                    calc_broadcast_index_offset(const std::vector<size_t>& memory_offsets,
                                                const std::vector<size_t>& broadcast_shape)
                {
                    std::vector<size_t> broadcast_offsets(broadcast_shape.size(), 0);
                    for (int i = broadcast_shape.size() - 2; i >= 0; --i)
                    {
                        if (broadcast_shape[i] == 1)
                        {
                            broadcast_offsets[i] = memory_offsets[i];
                        }
                    }
                    if (!std::all_of(broadcast_shape.begin(),
                                     broadcast_shape.end(),
                                     [](size_t i) { return i == 1; }) &&
                        broadcast_shape.back() == 1)
                    {
                        broadcast_offsets[broadcast_offsets.size() - 1] = 1;
                    }
                    if (broadcast_shape.back() == 1)
                    {
                        for (int i = broadcast_shape.size() - 1; i >= 0; --i)
                        {
                            if (broadcast_shape[i] != 1)
                            {
                                broadcast_offsets[i] = memory_offsets[i] - 1;
                                break;
                            }
                        }
                    }
                    return broadcast_offsets;
                }

                size_t calc_full_broadcast_offset(const std::vector<size_t>& current_dims,
                                                  const std::vector<size_t>& offsets)
                {
                    size_t full_index_offset = 0;
                    for (size_t i = 0; i < current_dims.size(); ++i)
                    {
                        full_index_offset += offsets[i] * current_dims[i];
                    }
                    return full_index_offset;
                }

                // Left-pads `shape` with 1s until it reaches `target_size`
                // (relies on unsigned wrap-around while the shape is shorter).
                void align_shape_sizes(Shape& shape, size_t target_size)
                {
                    for (size_t i = 0; i < shape.size() - target_size; ++i)
                    {
                        shape.insert(shape.begin(), 1);
                    }
                }

                void increment_current_dim(std::vector<size_t>& current_dims,
                                           const std::vector<size_t>& shape,
                                           size_t incremented_dim_number)
                {
                    current_dims[incremented_dim_number] += 1;
                    if (current_dims[incremented_dim_number] == shape[incremented_dim_number] &&
                        incremented_dim_number != 0)
                    {
                        for (size_t i = incremented_dim_number; i < shape.size(); ++i)
                        {
                            current_dims[i] = 0;
                        }
                        increment_current_dim(current_dims, shape, incremented_dim_number - 1);
                    }
                }
            }

            template <typename T>
            void fake_quantize(const T* arg,
                               const T* in_low,
                               const T* in_high,
                               const T* out_low,
                               const T* out_high,
                               T* out,
                               const Shape& arg_shape,
                               const Shape& _in_low_shape,
                               const Shape& _in_high_shape,
                               const Shape& _out_low_shape,
                               const Shape& _out_high_shape,
                               size_t levels)
            {
                auto initial_round_mode = std::fegetround();
                std::fesetround(FE_TONEAREST);
                Shape in_low_shape(_in_low_shape);
                Shape in_high_shape(_in_high_shape);
                Shape out_low_shape(_out_low_shape);
                Shape out_high_shape(_out_high_shape);

                if (in_low_shape.size() > arg_shape.size() ||
                    in_high_shape.size() > arg_shape.size() ||
                    out_low_shape.size() > arg_shape.size() ||
                    out_high_shape.size() > arg_shape.size())
                {
                    throw std::runtime_error(
                        std::string("Tensors with input\\output ranges should have rank less "
                                    "than or equal to data tensor rank, which is ") +
                        std::to_string(arg_shape.size()));
                }

                std::vector<size_t> arg_memory_offsets(arg_shape.size(), 0);
                for (int i = arg_shape.size() - 2; i >= 0; i--)
                {
                    arg_memory_offsets[i] = std::accumulate(
                        arg_shape.begin() + i + 1, arg_shape.end(), 1, std::multiplies<size_t>());
                }
                align_shape_sizes(in_low_shape, arg_shape.size());
                align_shape_sizes(in_high_shape, arg_shape.size());
                align_shape_sizes(out_low_shape, arg_shape.size());
                align_shape_sizes(out_high_shape, arg_shape.size());

                std::vector<size_t> in_low_offsets, in_high_offsets, out_low_offsets,
                    out_high_offsets;
                bool in_low_trivial_broadcast = false;
                bool in_high_trivial_broadcast = false;
                bool out_low_trivial_broadcast = false;
                bool out_high_trivial_broadcast = false;
                bool in_low_aligned = false;
                bool in_high_aligned = false;
                bool out_low_aligned = false;
                bool out_high_aligned = false;

                auto check_trivial_broadcast =
                    [&arg_shape, &arg_memory_offsets](Shape& shape_to_check,
                                                      std::vector<size_t>& target_offsets,
                                                      bool& trivial_broadcast,
                                                      bool& aligned) {
                        if (shape_size(shape_to_check) == 1 || shape_size(shape_to_check) == 0)
                        {
                            trivial_broadcast = true;
                        }
                        else if (shape_to_check == arg_shape)
                        {
                            aligned = true;
                        }
                        else
                        {
                            target_offsets =
                                calc_broadcast_index_offset(arg_memory_offsets, shape_to_check);
                        }
                    };
                check_trivial_broadcast(
                    in_low_shape, in_low_offsets, in_low_trivial_broadcast, in_low_aligned);
                check_trivial_broadcast(
                    in_high_shape, in_high_offsets, in_high_trivial_broadcast, in_high_aligned);
                check_trivial_broadcast(
                    out_low_shape, out_low_offsets, out_low_trivial_broadcast, out_low_aligned);
                check_trivial_broadcast(
                    out_high_shape, out_high_offsets, out_high_trivial_broadcast, out_high_aligned);

                std::vector<size_t> current_dim(arg_shape.size(), 0);

                auto get_value = [&current_dim](bool is_trivial_broadcast,
                                                bool is_aligned,
                                                const T* data,
                                                size_t idx,
                                                const std::vector<size_t>& offsets) {
                    T val;
                    if (is_aligned)
                    {
                        val = data[idx];
                    }
                    else if (is_trivial_broadcast)
                    {
                        val = data[0];
                    }
                    else
                    {
                        size_t index_offset = calc_full_broadcast_offset(current_dim, offsets);
                        if (index_offset != 0)
                        {
                            NGRAPH_CHECK(idx >= index_offset, "Incorrect index offset value!");
                        }
                        val = data[idx - index_offset];
                    }
                    return val;
                };
                for (size_t i = 0; i < shape_size(arg_shape); ++i)
                {
                    T in_low_val = get_value(
                        in_low_trivial_broadcast, in_low_aligned, in_low, i, in_low_offsets);
                    T in_high_val = get_value(
                        in_high_trivial_broadcast, in_high_aligned, in_high, i, in_high_offsets);
                    T out_low_val = get_value(
                        out_low_trivial_broadcast, out_low_aligned, out_low, i, out_low_offsets);
                    T out_high_val = get_value(out_high_trivial_broadcast,
                                               out_high_aligned,
                                               out_high,
                                               i,
                                               out_high_offsets);
                    if (arg[i] <= in_low_val)
                    {
                        out[i] = out_low_val;
                    }
                    else if (arg[i] > in_high_val)
                    {
                        out[i] = out_high_val;
                    }
                    else
                    {
                        out[i] = nearbyint((arg[i] - in_low_val) / (in_high_val - in_low_val) *
                                           (levels - 1)) /
                                     (levels - 1) * (out_high_val - out_low_val) +
                                 out_low_val;
                    }
                    increment_current_dim(current_dim, arg_shape, arg_shape.size() - 1);
                }
                std::fesetround(initial_round_mode);
            }
        }
    }
}
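For reference, the per-element computation in the loop above is the usual FakeQuantize transfer function with L = levels:

    \mathrm{out}_i =
    \begin{cases}
        \mathrm{out\_low}, & x_i \le \mathrm{in\_low} \\
        \mathrm{out\_high}, & x_i > \mathrm{in\_high} \\
        \dfrac{\operatorname{round}\!\bigl(\frac{x_i - \mathrm{in\_low}}{\mathrm{in\_high} - \mathrm{in\_low}} (L-1)\bigr)}{L-1}
            \,(\mathrm{out\_high} - \mathrm{out\_low}) + \mathrm{out\_low}, & \text{otherwise}
    \end{cases}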
@@ -0,0 +1,76 @@
//*****************************************************************************
// Copyright 2017-2020 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************

#pragma once

#include <cstddef>
#include <vector>                                // std::vector (assumed missing from the original hunk)
#include <ngraph/runtime/reference/add.hpp>      // add() (assumed missing)
#include <ngraph/runtime/reference/divide.hpp>   // divide() (assumed missing)
#include <ngraph/runtime/reference/mean.hpp>
#include <ngraph/runtime/reference/multiply.hpp>
#include <ngraph/runtime/reference/sqrt.hpp>
#include <ngraph/runtime/reference/subtract.hpp>
#include <ngraph/runtime/reference/sum.hpp>
#include <ngraph/shape.hpp>
#include <ngraph/shape_util.hpp>                 // reduce() (assumed missing)

namespace ngraph
{
    namespace runtime
    {
        namespace reference
        {
            template <typename T>
            void mvn(const T* arg,
                     T* out,
                     const Shape& in_shape,
                     bool normalize_variance,
                     AxisSet reduction_axes,
                     double eps)
            {
                auto reduced_shape = reduce(in_shape, reduction_axes, true);
                std::vector<T> tmp_buffer(shape_size(in_shape));
                mean(arg, tmp_buffer.data(), in_shape, reduction_axes, true);
                subtract(arg,
                         tmp_buffer.data(),
                         out,
                         in_shape,
                         reduced_shape,
                         op::AutoBroadcastSpec::NUMPY);

                if (normalize_variance)
                {
                    multiply(out, out, tmp_buffer.data(), shape_size(in_shape));
                    std::vector<T> mean_value(shape_size(reduced_shape));
                    mean(tmp_buffer.data(), mean_value.data(), in_shape, reduction_axes, true);

                    add(mean_value.data(),
                        std::vector<T>(shape_size(reduced_shape), eps).data(),
                        tmp_buffer.data(),
                        reduced_shape,
                        reduced_shape,
                        op::AutoBroadcastSpec::NUMPY);
                    sqrt(tmp_buffer.data(), tmp_buffer.data(), shape_size(reduced_shape));

                    divide(out,
                           tmp_buffer.data(),
                           out,
                           in_shape,
                           reduced_shape,
                           op::AutoBroadcastSpec::NUMPY,
                           true);
                }
            }
        } // namespace reference
    } // namespace runtime
} // namespace ngraph
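The sequence of reference calls above (mean, subtract, then optionally square, mean, add-eps, sqrt, divide) computes, with mu and sigma^2 taken over reduction_axes:

    \mathrm{mvn}(x) =
    \begin{cases}
        x - \mu, & \text{if } \lnot\,\mathrm{normalize\_variance} \\
        \dfrac{x - \mu}{\sqrt{\sigma^2 + \varepsilon}}, & \text{otherwise}
    \end{cases}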
@@ -109,8 +109,9 @@ namespace ngraph

    // Define an empty pooling region to be zero
    bool is_empty = (h_end <= h_start) || (w_end <= w_start);
-   output[pool_index] =
-       is_empty ? 0 : std::numeric_limits<T>::lowest();
+   output[pool_index] = is_empty
+                            ? static_cast<T>(0)
+                            : std::numeric_limits<T>::lowest();

    for (unsigned int h = h_start; h < h_end; h++)
    {
@@ -138,8 +139,10 @@ namespace ngraph
    T roi_height = (roi_h_end - roi_h_start) * (height - 1);
    T roi_width = (roi_w_end - roi_w_start) * (width - 1);

-   T roi_height_scale = (pooled_h > 1) ? roi_height / (pooled_h - 1) : 0;
-   T roi_width_scale = (pooled_w > 1) ? roi_width / (pooled_w - 1) : 0;
+   T roi_height_scale =
+       (pooled_h > 1) ? roi_height / (pooled_h - 1) : static_cast<T>(0);
+   T roi_width_scale =
+       (pooled_w > 1) ? roi_width / (pooled_w - 1) : static_cast<T>(0);

    for (unsigned int c = 0; c < channels; c++)
    {
@@ -32,11 +32,14 @@ namespace ngraph
                const T* arg1,
                const T* arg2,
                T* out,
-               size_t count) // TODO: using char for bool, is this right?
+               size_t arg0_count,
+               size_t arg1_count,
+               size_t arg2_count,
+               size_t out_count)
    {
-       for (size_t i = 0; i < count; i++)
+       for (size_t i = 0; i < out_count; i++)
        {
-           out[i] = arg0[i] ? arg1[i] : arg2[i];
+           out[i] = arg0[i % arg0_count] ? arg1[i % arg1_count] : arg2[i % arg2_count];
        }
    }
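The modulo indexing added here lets a smaller operand wrap around the output: a one-element tensor broadcasts because i % 1 == 0 for every i. It only matches full numpy broadcasting for the pre-squeezed element counts the caller passes in; a minimal standalone sketch of the trick:

    // Hypothetical standalone version of the kernel above, not the library API.
    #include <cstddef>
    #include <iostream>

    template <typename T>
    void select(const char* arg0, const T* arg1, const T* arg2, T* out,
                std::size_t arg0_count, std::size_t arg1_count,
                std::size_t arg2_count, std::size_t out_count)
    {
        for (std::size_t i = 0; i < out_count; i++)
        {
            out[i] = arg0[i % arg0_count] ? arg1[i % arg1_count] : arg2[i % arg2_count];
        }
    }

    int main()
    {
        const char cond[1] = {1};            // scalar condition, broadcast over 4 outputs
        const int a[4] = {1, 2, 3, 4};
        const int b[4] = {10, 20, 30, 40};
        int out[4];
        select(cond, a, b, out, 1, 4, 4, 4); // picks a[] everywhere, since cond is true
        for (int v : out)
            std::cout << v << ' ';           // prints: 1 2 3 4
    }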
@@ -0,0 +1,46 @@
//*****************************************************************************
// Copyright 2017-2020 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************

#pragma once

#include <cmath>
#include <cstddef>

#include "ngraph/runtime/reference/autobroadcast_binop.hpp"
#include "ngraph/shape_util.hpp"

namespace ngraph
{
    namespace runtime
    {
        namespace reference
        {
            template <typename T>
            void squared_difference(const T* arg0,
                                    const T* arg1,
                                    T* out,
                                    const Shape& arg0_shape,
                                    const Shape& arg1_shape,
                                    const op::AutoBroadcastSpec& broadcast_spec)
            {
                autobroadcast_binop(
                    arg0, arg1, out, arg0_shape, arg1_shape, broadcast_spec, [](T x, T y) -> T {
                        return (x - y) * (x - y);
                    });
            }
        }
    }
}
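Usage reduces to a single call with numpy broadcasting; a hedged example with hypothetical shapes:

    // (x - y)^2 elementwise; arg1 of shape {1} broadcasts across arg0 of shape {2, 2}.
    float x[4] = {1.f, 2.f, 3.f, 4.f};
    float y[1] = {1.f};
    float out[4];
    ngraph::runtime::reference::squared_difference<float>(
        x, y, out, ngraph::Shape{2, 2}, ngraph::Shape{1},
        ngraph::op::AutoBroadcastSpec::NUMPY); // out = {0, 1, 4, 9}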
@@ -186,8 +186,8 @@ void ngraph::replace_node(std::shared_ptr<Node> target,
            input.replace_source_output(replacement->output(output_order[i]));
        }
    }

    replacement->add_node_control_dependents(target);
+   replacement->add_node_control_dependencies(target);
    target->clear_control_dependents();
}

@@ -212,6 +212,7 @@ void ngraph::replace_node(const std::shared_ptr<Node>& target,
    if (replacement_nodes.find(replacement_node) == replacement_nodes.end())
    {
        replacement_node->add_node_control_dependents(target);
+       replacement_node->add_node_control_dependencies(target);
        target->transfer_provenance_tags(replacement_node);
        replacement_nodes.insert(replacement_node);
    }
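The added calls mirror the existing ones: the replacement now inherits not only the nodes that depended on the target (dependents) but also the nodes the target itself depended on (dependencies), which was the edge being lost. A minimal sketch of the invariant, assuming the public nGraph control-dependency API:

    #include "ngraph/graph_util.hpp"
    #include "ngraph/op/parameter.hpp"
    #include "ngraph/op/relu.hpp"

    using namespace ngraph;

    // Sketch: a control dependency hung on `target` must survive replacement.
    void control_dep_survives()
    {
        auto in = std::make_shared<op::v0::Parameter>(element::f32, Shape{4});
        auto barrier = std::make_shared<op::v0::Parameter>(element::f32, Shape{});
        auto target = std::make_shared<op::v0::Relu>(in);
        target->add_control_dependency(barrier); // target must run after barrier

        auto replacement = std::make_shared<op::v0::Relu>(in);
        replace_node(target, replacement);

        // With the fix, `replacement` now lists `barrier` among its control
        // dependencies instead of the edge being silently dropped.
    }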
@@ -24,35 +24,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
using namespace std;
using namespace ngraph;

// ------------------------------- v0 ------------------------------------------

constexpr NodeTypeInfo op::v0::Add::type_info;

op::v0::Add::Add(const Output<Node>& arg0,
                 const Output<Node>& arg1,
                 const AutoBroadcastSpec& auto_broadcast)
    : BinaryElementwiseArithmetic(arg0, arg1, auto_broadcast)
{
    constructor_validate_and_infer_types();
}

shared_ptr<Node> op::v0::Add::clone_with_new_inputs(const OutputVector& new_args) const
{
    check_new_args_count(this, new_args);
    return make_shared<op::v0::Add>(new_args.at(0), new_args.at(1), this->get_autob());
}

bool op::v0::Add::visit_attributes(AttributeVisitor& visitor)
{
    BinaryElementwiseArithmetic::visit_attributes(visitor);
    return true;
}

shared_ptr<Node> ngraph::operator+(const Output<Node>& arg0, const Output<Node>& arg1)
{
    return make_shared<op::Add>(arg0, arg1);
}

namespace add
{
    template <element::Type_t ET>
@@ -107,12 +78,6 @@ namespace add
    }
}

bool op::v0::Add::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const
{
    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Add::evaluate");
    return add::evaluate_add(inputs[0], inputs[1], outputs[0], get_autob());
}

// ------------------------------- v1 ------------------------------------------

NGRAPH_RTTI_DEFINITION(op::v1::Add, "Add", 1, util::BinaryElementwiseArithmetic);
@@ -141,4 +106,4 @@ bool op::v1::Add::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const
{
    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v1::Add::evaluate");
    return add::evaluate_add(inputs[0], inputs[1], outputs[0], get_autob());
}
@@ -16,13 +16,19 @@
#include <cmath>
#include <cstddef>
#include <memory>
+#include <numeric>
+#include <ops.hpp>

#include "ngraph/builder/make_constant.hpp"
#include "ngraph/node.hpp"
#include "ngraph/op/batch_to_space.hpp"
#include "ngraph/opsets/opset3.hpp"
#include "ngraph/shape.hpp"

+#include "ngraph/runtime/opt_kernel/reshape.hpp"
+#include "ngraph/runtime/reference/strided_slice.hpp"
+#include "ngraph/slice_plan.hpp"

using namespace std;
using namespace ngraph;

@@ -134,3 +140,115 @@ bool ngraph::op::v1::BatchToSpace::visit_attributes(ngraph::AttributeVisitor& visitor)
{
    return true;
}

bool ngraph::op::v1::BatchToSpace::evaluate(const HostTensorVector& outputs,
                                            const HostTensorVector& inputs) const
{
    auto data = inputs[0];
    size_t elem_size = data->get_element_type().size();

    if (data->get_partial_shape().is_dynamic())
    {
        return false;
    }
    auto data_shape = data->get_shape();

    if (!(data->get_shape().size() == 4 || data->get_shape().size() == 5))
    {
        return false;
    }
    size_t block_values_size = shape_size(inputs[1]->get_shape());
    const auto* block_values = inputs[1]->get_data_ptr<int64_t>();
    const auto* crops_begin_values = inputs[2]->get_data_ptr<int64_t>();
    const auto* crops_end_values = inputs[3]->get_data_ptr<int64_t>();

    Shape dispersed_shape(1);
    dispersed_shape.insert(dispersed_shape.end(), data_shape.begin(), data_shape.end());
    std::vector<size_t> axes_order(block_values_size + 1);
    std::vector<size_t> plain_axes_order(block_values_size + 1);
    std::iota(plain_axes_order.begin(), plain_axes_order.end(), 0);
    Shape squeezed_shape(data_shape.begin(), data_shape.end());
    if (squeezed_shape.size() > block_values_size)
    {
        return false;
    }

    auto* flat_data = data->get_data_ptr<char>();
    std::vector<char> dispersed_data(shape_size(data_shape) * elem_size);

    Shape post_transpose_shape(axes_order.size());
    std::vector<char> post_transpose_data(shape_size(data_shape) * elem_size);

    for (size_t block_idx = 1; block_idx < block_values_size; ++block_idx)
    {
        dispersed_shape[0] = block_values[block_idx];
        dispersed_shape[1] /= block_values[block_idx];
        runtime::opt_kernel::reshape(flat_data,
                                     dispersed_data.data(),
                                     data_shape,
                                     plain_axes_order,
                                     dispersed_shape,
                                     elem_size);

        size_t val = 1;
        for (size_t axis_idx = 0; axis_idx <= block_values_size; ++axis_idx)
        {
            if ((block_idx + 1) == axis_idx)
            {
                axes_order[axis_idx] = 0;
            }
            else
            {
                axes_order[axis_idx] = val;
                val++;
            }
        }
        for (size_t axis_idx = 0; axis_idx < axes_order.size(); ++axis_idx)
        {
            post_transpose_shape[axis_idx] = dispersed_shape[axes_order[axis_idx]];
        }

        runtime::opt_kernel::reshape(dispersed_data.data(),
                                     post_transpose_data.data(),
                                     dispersed_shape,
                                     axes_order,
                                     post_transpose_shape,
                                     elem_size);
        squeezed_shape[0] = dispersed_shape[1];
        squeezed_shape[block_idx] *= block_values[block_idx];
        dispersed_shape[block_idx + 1] = squeezed_shape[block_idx];
        runtime::opt_kernel::reshape(post_transpose_data.data(),
                                     flat_data,
                                     post_transpose_shape,
                                     plain_axes_order,
                                     squeezed_shape,
                                     elem_size);
        data_shape = squeezed_shape;
    }

    std::vector<int64_t> upperbounds_values(data_shape.size());
    for (size_t i = 0; i < data_shape.size(); ++i)
    {
        upperbounds_values[i] = data_shape[i] - crops_end_values[i];
    }

    std::vector<size_t> begin_mask(data_shape.size(), 0);
    std::vector<size_t> end_mask(data_shape.size(), 0);

    std::vector<int64_t> begins(shape_size(inputs[2]->get_shape()));
    begins.assign(crops_begin_values, crops_begin_values + shape_size(inputs[2]->get_shape()));

    std::vector<int64_t> default_strides(begins.size(), 1);
    SlicePlan slice_plan = make_slice_plan(data_shape,
                                           begins,
                                           upperbounds_values,
                                           default_strides,
                                           begin_mask,
                                           end_mask,
                                           AxisSet(),
                                           AxisSet(),
                                           AxisSet());
    runtime::reference::strided_slice(
        flat_data, outputs[0]->get_data_ptr<char>(), data_shape, slice_plan, elem_size);
    return true;
}
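The reshape/transpose loop followed by the strided-slice crop realizes the usual BatchToSpace shape arithmetic; for input x of rank 4 or 5 and block values b (with b_0 = 1):

    y_0 = \frac{x_0}{\prod_i b_i}, \qquad
    y_i = x_i \cdot b_i - \mathrm{crops\_begin}_i - \mathrm{crops\_end}_i \quad (i \ge 1)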
@@ -221,8 +221,8 @@ OutputVector op::Clamp::decompose_op() const
        default: throw runtime_error("Unsupported data type in op Clamp"); break;
    }

-   auto max = make_shared<op::Maximum>(clamp_min, data);
-   return {make_shared<op::Minimum>(clamp_max, max)};
+   auto max = make_shared<op::v1::Maximum>(clamp_min, data);
+   return {make_shared<op::v1::Minimum>(clamp_max, max)};
}

shared_ptr<Node> op::Clamp::clone_with_new_inputs(const OutputVector& new_args) const
@@ -16,12 +16,17 @@
#include <cmath>
#include <cstddef>
#include <memory>
+#include <ngraph/op/constant.hpp>
+#include <ngraph/ops.hpp>
#include <numeric>

#include "depth_to_space.hpp"
#include "ngraph/builder/reshape.hpp"
#include "ngraph/node.hpp"
#include "ngraph/shape.hpp"

+#include "ngraph/runtime/opt_kernel/reshape.hpp"

using namespace std;
using namespace ngraph;

@@ -32,7 +37,7 @@ NGRAPH_RTTI_DEFINITION(op::v0::DepthToSpace, "DepthToSpace", 0);
op::DepthToSpace::DepthToSpace(const Output<Node>& data,
                               const DepthToSpaceMode& mode,
                               const size_t block_size)
-    : FusedOp({data})
+    : Op({data})
    , m_blocksize(block_size)
    , m_mode(mode)
{
@@ -53,23 +58,73 @@ bool op::DepthToSpace::visit_attributes(AttributeVisitor& visitor)
    return true;
}

-OutputVector op::DepthToSpace::decompose_op() const
+shared_ptr<Node> op::DepthToSpace::clone_with_new_inputs(const OutputVector& new_args) const
{
-    auto data = input_value(0);
-    auto data_shape = data.get_shape();
-
-    NODE_VALIDATION_CHECK(this,
-                          (data_shape.size() >= 3),
-                          "The input tensor with rank lower than 3 is not supported (input rank: ",
-                          data_shape.size(),
-                          ")");
-
-    if (data_shape.size() == 3)
+    if (new_args.size() != 1)
    {
-        // Insert batch axis
-        data_shape.insert(data_shape.begin(), 1);
-        data = builder::opset1::reshape(data, data_shape);
+        throw ngraph_error("Incorrect number of new arguments");
    }
+    return make_shared<DepthToSpace>(new_args.at(0), m_mode, m_blocksize);
+}
+
+void op::DepthToSpace::validate_and_infer_types()
+{
+    PartialShape data_pshape = get_input_partial_shape(0);
+
+    const auto& data_type = get_input_element_type(0);
+
+    auto data = input_value(0);
+
+    if (data_pshape.is_static())
+    {
+        const auto& data_shape = data.get_shape();
+
+        NODE_VALIDATION_CHECK(
+            this,
+            !(data_shape.size() < 3),
+            "The input tensor with rank lower than 3 is not supported (input rank: ",
+            data_shape.size(),
+            ")");
+
+        auto divider = std::pow(m_blocksize, data_shape.size() - 2);
+        NODE_VALIDATION_CHECK(this, (divider), "DepthToSpace: The divider must not be 0");
+
+        NODE_VALIDATION_CHECK(this,
+                              m_blocksize > 0 && !(data_shape[1] % m_blocksize),
+                              "DepthToSpace: The input data's 'channels' axis size: ",
+                              data_shape[1],
+                              " must be a equivalent to 'block_size'^'spatial_dims': ",
+                              divider);
+
+        auto out_shape = data_shape;
+        out_shape[1] /= divider;
+        for (size_t i = 2; i < out_shape.size(); i++)
+        {
+            out_shape[i] *= m_blocksize;
+        }
+
+        set_output_size(1);
+        set_output_type(0, data_type, out_shape);
+    }
+    else
+    {
+        set_output_type(0, data_type, PartialShape::dynamic());
+    }
+}
+
+bool op::DepthToSpace::evaluate(const HostTensorVector& outputs,
+                                const HostTensorVector& inputs) const
+{
+    const auto& data = inputs[0];
+    const auto& out = outputs[0];
+    const auto& out_shape = out->get_shape();
+    size_t elem_size = data->get_element_type().size();
+
+    if (data->get_partial_shape().is_dynamic())
+    {
+        return false;
+    }
+    auto data_shape = data->get_shape();
    const size_t n_dim = data_shape.at(0);
    const size_t c_dim = data_shape.at(1);
    const size_t spatial_dim_index = 2;
@@ -111,8 +166,6 @@ OutputVector op::DepthToSpace::decompose_op() const
    case DepthToSpaceMode::DEPTH_FIRST:
    {
        dispersed_shape.insert(dispersed_shape.begin() + 1, c_flat);
-       flat_node = builder::opset1::reshape(data, dispersed_shape);

        axes_order.push_back(1);
        for (int i = spatial_dim_index; i < data_shape.size(); ++i)
        {
@@ -120,7 +173,6 @@ OutputVector op::DepthToSpace::decompose_op() const
            axes_order.push_back(i);
        }

-       flat_node = builder::opset1::reorder_axes(flat_node, axes_order);
        break;
    }
    // x' = reshape(data, [N, block_size, block_size, ..., block_size, C / (block_size ^ K), D1, D2,
@@ -132,36 +184,56 @@ OutputVector op::DepthToSpace::decompose_op() const
    default:
    {
        dispersed_shape.insert(dispersed_shape.begin() + spatial_dims + 1, c_flat);
-       flat_node = builder::opset1::reshape(data, dispersed_shape);

        axes_order.push_back(spatial_dims + 1);
        for (int i = 2; i < data_shape.size(); ++i)
        {
-           axes_order.push_back(spatial_dims + i);
+           axes_order.push_back(i - 1);
        }
-       flat_node = builder::opset1::reorder_axes(flat_node, axes_order);
        break;
    }
    }
+   std::vector<size_t> plain_axes_order(data_shape.size());
+   std::iota(plain_axes_order.begin(), plain_axes_order.end(), 0);
+   std::vector<char> dispersed_data(shape_size(data_shape) * elem_size);
+   std::vector<char> transposed_data(shape_size(data_shape) * elem_size);
+
+   runtime::opt_kernel::reshape(data->get_data_ptr<char>(),
+                                dispersed_data.data(),
+                                data_shape,
+                                plain_axes_order,
+                                dispersed_shape,
+                                elem_size);
+
+   Shape post_transpose_shape(axes_order.size());
+   for (size_t axis_idx = 0; axis_idx < axes_order.size(); ++axis_idx)
+   {
+       post_transpose_shape[axis_idx] = dispersed_shape[axes_order[axis_idx]];
+   }
+   runtime::opt_kernel::reshape(dispersed_data.data(),
+                                transposed_data.data(),
+                                dispersed_shape,
+                                axes_order,
+                                post_transpose_shape,
+                                elem_size);

    Shape squeezed_shape{n_dim, c_flat};
    for (int i = spatial_dim_index; i < data_shape.size(); ++i)
    {
        squeezed_shape.push_back(data_shape.at(i) * bs);
    }
-   flat_node = builder::opset1::reshape(flat_node, squeezed_shape);
-
-   return OutputVector{flat_node};
-}
-
-shared_ptr<Node> op::DepthToSpace::clone_with_new_inputs(const OutputVector& new_args) const
-{
-   if (new_args.size() != 1)
+   for (size_t i = plain_axes_order.size() - 1; i < post_transpose_shape.size() - 1; ++i)
    {
-       throw ngraph_error("Incorrect number of new arguments");
+       plain_axes_order.push_back(plain_axes_order[i] + 1);
    }
-   return make_shared<DepthToSpace>(new_args.at(0), m_mode, m_blocksize);
+   runtime::opt_kernel::reshape(transposed_data.data(),
+                                out->get_data_ptr<char>(),
+                                post_transpose_shape,
+                                plain_axes_order,
+                                squeezed_shape,
+                                elem_size);
+   return true;
}

namespace ngraph
{
    template <>
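The evaluate path above reproduces the old decomposition's shape arithmetic directly on raw buffers; for block size B, K spatial dims, and input [N, C, D_1, ..., D_K]:

    y_C = \frac{C}{B^{K}}, \qquad y_{D_i} = D_i \cdot B \quad (i = 1..K), \qquad y_N = N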
@@ -26,47 +26,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
using namespace std;
using namespace ngraph;

// ------------------------------ v0 -------------------------------------------

constexpr NodeTypeInfo op::v0::Divide::type_info;

op::v0::Divide::Divide(const Output<Node>& arg0,
                       const Output<Node>& arg1,
                       const AutoBroadcastSpec& auto_broadcast)
    : BinaryElementwiseArithmetic(arg0, arg1, auto_broadcast)
{
    constructor_validate_and_infer_types();
}

op::v0::Divide::Divide(const Output<Node>& arg0,
                       const Output<Node>& arg1,
                       bool pythondiv,
                       const AutoBroadcastSpec& auto_broadcast)
    : BinaryElementwiseArithmetic(arg0, arg1, auto_broadcast)
    , m_pythondiv(pythondiv)
{
    constructor_validate_and_infer_types();
}

bool op::v0::Divide::visit_attributes(AttributeVisitor& visitor)
{
    BinaryElementwiseArithmetic::visit_attributes(visitor);
    visitor.on_attribute("m_pythondiv", m_pythondiv);
    return true;
}

shared_ptr<Node> op::v0::Divide::clone_with_new_inputs(const OutputVector& new_args) const
{
    check_new_args_count(this, new_args);
    return make_shared<op::v0::Divide>(
        new_args.at(0), new_args.at(1), this->is_pythondiv(), this->get_autob());
}

shared_ptr<Node> ngraph::operator/(const Output<Node>& arg0, const Output<Node>& arg1)
{
    return make_shared<op::v0::Divide>(arg0, arg1);
}

namespace divide
{
    template <element::Type_t ET>
@@ -116,12 +75,6 @@ namespace divide
    }
}

bool op::v0::Divide::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const
{
    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Divide::evaluate");
    return divide::evaluate_divide(inputs[0], inputs[1], outputs[0], get_autob(), is_pythondiv());
}

// ------------------------------ v1 -------------------------------------------

NGRAPH_RTTI_DEFINITION(op::v1::Divide, "Divide", 1, util::BinaryElementwiseArithmetic);
@@ -69,4 +69,4 @@ shared_ptr<Node>
    {
        throw ngraph_error("Incorrect number of arguments");
    }
}
@@ -24,24 +24,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
using namespace std;
using namespace ngraph;

//------------------------------- v0 -------------------------------------------

constexpr NodeTypeInfo op::v0::Equal::type_info;

op::v0::Equal::Equal(const Output<Node>& arg0,
                     const Output<Node>& arg1,
                     const AutoBroadcastSpec& auto_broadcast)
    : BinaryElementwiseComparison(arg0, arg1, auto_broadcast)
{
    constructor_validate_and_infer_types();
}

shared_ptr<Node> op::v0::Equal::clone_with_new_inputs(const OutputVector& new_args) const
{
    check_new_args_count(this, new_args);
    return make_shared<op::v0::Equal>(new_args.at(0), new_args.at(1), this->get_autob());
}

namespace equal
{
    template <element::Type_t ET>
@@ -88,12 +70,6 @@ namespace equal
    }
}

bool op::v0::Equal::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const
{
    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Equal::evaluate");
    return equal::evaluate_equal(inputs[0], inputs[1], outputs[0], get_autob());
}

//------------------------------- v1 -------------------------------------------

NGRAPH_RTTI_DEFINITION(op::v1::Equal, "Equal", 1);
@@ -130,19 +130,21 @@ OutputVector op::FakeQuantize::decompose_op() const
        vector<size_t>(shape_size(input_data_shape), m_levels - 1));

    // map the number of quantization levels to the nGraph's quantization and dequantization scales
-   const auto quant_scale = (input_high - input_low) / levels_minus_one;
-   const auto dequant_scale = (output_high - output_low) / levels_minus_one;
+   const auto quant_scale = std::make_shared<op::v1::Divide>(
+       std::make_shared<op::v1::Subtract>(input_high, input_low), levels_minus_one);
+   const auto dequant_scale = std::make_shared<op::v1::Divide>(
+       std::make_shared<op::v1::Subtract>(output_high, output_low), levels_minus_one);

    // zero_point type needs to match the quantization output type
    const auto zero_point = Constant::create(element::Type_t::i32, data.get_shape(), {0.0});
    const auto axes = get_default_order(input_data_shape);

    // clip the input data to the range <input_low;input_high>
-   data =
-       std::make_shared<op::Minimum>(input_high, std::make_shared<op::Maximum>(input_low, data));
+   data = std::make_shared<op::v1::Minimum>(input_high,
+                                            std::make_shared<op::v1::Maximum>(input_low, data));

    // shift the input data so that it contains only positive values (and zeros)
-   data = data - input_low;
+   data = std::make_shared<op::v1::Subtract>(data, input_low);

    shared_ptr<Node> quantized_data =
        make_shared<op::Quantize>(data,
@@ -155,10 +157,10 @@ OutputVector op::FakeQuantize::decompose_op() const
    quantized_data = make_shared<op::Convert>(quantized_data, input_data_type);

    // dequantization without using the Dequantize op (just a multiplication by the dequant_scale)
-   const auto dequantized_data = quantized_data * dequant_scale;
+   const auto dequantized_data = make_shared<op::v1::Multiply>(quantized_data, dequant_scale);

    // shift the results so that they fall into the <output_low;output_high> range
-   return {dequantized_data + output_low};
+   return {std::make_shared<op::v1::Add>(dequantized_data, output_low)};
}

shared_ptr<Node> op::FakeQuantize::clone_with_new_inputs(const OutputVector& new_args) const
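Only the operator spelling changes here; the decomposition still computes, with L = levels:

    q = \frac{\mathrm{in\_high} - \mathrm{in\_low}}{L - 1}, \qquad
    d = \frac{\mathrm{out\_high} - \mathrm{out\_low}}{L - 1}
    y = \operatorname{round}\!\Bigl(\frac{\operatorname{clip}(x,\ \mathrm{in\_low},\ \mathrm{in\_high}) - \mathrm{in\_low}}{q}\Bigr)\cdot d + \mathrm{out\_low}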
@@ -58,7 +58,11 @@ OutputVector op::Gelu::decompose_op() const
    shared_ptr<ngraph::Node> sqrt_two =
        builder::make_constant(data.get_element_type(), data.get_shape(), std::sqrt(2.0));

-   return {half * data * (one + make_shared<ngraph::op::Erf>(data / sqrt_two))};
+   shared_ptr<ngraph::Node> add = std::make_shared<op::v1::Add>(
+       one, make_shared<ngraph::op::Erf>(std::make_shared<op::v1::Divide>(data, sqrt_two)));
+   shared_ptr<ngraph::Node> multiply = std::make_shared<op::v1::Multiply>(half, data);
+
+   return {std::make_shared<op::v1::Multiply>(multiply, add)};
}

shared_ptr<Node> op::Gelu::clone_with_new_inputs(const OutputVector& new_args) const
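Both the old expression and the new explicit v1 graph build the same erf-based GELU:

    \mathrm{Gelu}(x) = \frac{x}{2}\left(1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right)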
@@ -24,24 +24,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
using namespace std;
using namespace ngraph;

//-------------------------------------- v0 ------------------------------------

constexpr NodeTypeInfo op::v0::Greater::type_info;

op::v0::Greater::Greater(const Output<Node>& arg0,
                         const Output<Node>& arg1,
                         const AutoBroadcastSpec& auto_broadcast)
    : BinaryElementwiseComparison(arg0, arg1, auto_broadcast)
{
    constructor_validate_and_infer_types();
}

shared_ptr<Node> op::v0::Greater::clone_with_new_inputs(const OutputVector& new_args) const
{
    check_new_args_count(this, new_args);
    return make_shared<op::v0::Greater>(new_args.at(0), new_args.at(1), this->get_autob());
}

namespace greaterop
{
    template <element::Type_t ET>
@@ -88,13 +70,6 @@ namespace greaterop
    }
}

bool op::v0::Greater::evaluate(const HostTensorVector& outputs,
                               const HostTensorVector& inputs) const
{
    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Greater::evaluate");
    return greaterop::evaluate_greater(inputs[0], inputs[1], outputs[0], get_autob());
}

//-------------------------------------- v1 ------------------------------------

NGRAPH_RTTI_DEFINITION(op::v1::Greater, "Greater", 1);
@@ -24,24 +24,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
using namespace std;
using namespace ngraph;

//---------------------------------- v0 ----------------------------------------

constexpr NodeTypeInfo op::v0::GreaterEq::type_info;

op::v0::GreaterEq::GreaterEq(const Output<Node>& arg0,
                             const Output<Node>& arg1,
                             const AutoBroadcastSpec& auto_broadcast)
    : BinaryElementwiseComparison(arg0, arg1, auto_broadcast)
{
    constructor_validate_and_infer_types();
}

shared_ptr<Node> op::v0::GreaterEq::clone_with_new_inputs(const OutputVector& new_args) const
{
    check_new_args_count(this, new_args);
    return make_shared<op::v0::GreaterEq>(new_args.at(0), new_args.at(1), this->get_autob());
}

namespace greater_equalop
{
    template <element::Type_t ET>
@@ -88,13 +70,6 @@ namespace greater_equalop
    }
}

bool op::v0::GreaterEq::evaluate(const HostTensorVector& outputs,
                                 const HostTensorVector& inputs) const
{
    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::GreaterEq::evaluate");
    return greater_equalop::evaluate_greater_equal(inputs[0], inputs[1], outputs[0], get_autob());
}

//---------------------------------- v1 ----------------------------------------

NGRAPH_RTTI_DEFINITION(op::v1::GreaterEqual, "GreaterEqual", 1);
@ -24,24 +24,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
|
||||
using namespace std;
|
||||
using namespace ngraph;
|
||||
|
||||
// ----------------------------- v0 --------------------------------------------
|
||||
|
||||
constexpr NodeTypeInfo op::v0::Less::type_info;
|
||||
|
||||
op::v0::Less::Less(const Output<Node>& arg0,
|
||||
const Output<Node>& arg1,
|
||||
const AutoBroadcastSpec& auto_broadcast)
|
||||
: BinaryElementwiseComparison(arg0, arg1, auto_broadcast)
|
||||
{
|
||||
constructor_validate_and_infer_types();
|
||||
}
|
||||
|
||||
shared_ptr<Node> op::v0::Less::clone_with_new_inputs(const OutputVector& new_args) const
|
||||
{
|
||||
check_new_args_count(this, new_args);
|
||||
return make_shared<op::v0::Less>(new_args.at(0), new_args.at(1), this->get_autob());
|
||||
}
|
||||
|
||||
namespace lessop
|
||||
{
|
||||
template <element::Type_t ET>
|
||||
@ -88,12 +70,6 @@ namespace lessop
|
||||
}
|
||||
}
|
||||
|
||||
bool op::v0::Less::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const
|
||||
{
|
||||
OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Less::evaluate");
|
||||
return lessop::evaluate_less(inputs[0], inputs[1], outputs[0], get_autob());
|
||||
}
|
||||
|
||||
// ----------------------------- v1 --------------------------------------------
|
||||
|
||||
NGRAPH_RTTI_DEFINITION(op::v1::Less, "Less", 1);
|
||||
|
@ -94,27 +94,3 @@ bool op::v1::LessEqual::evaluate(const HostTensorVector& outputs,
|
||||
OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v1::LessEqual::evaluate");
|
||||
return less_equalop::evaluate_less_equal(inputs[0], inputs[1], outputs[0], get_autob());
|
||||
}
|
||||
|
||||
// ---------------------------------- v0 ---------------------------------------
|
||||
|
||||
constexpr NodeTypeInfo op::v0::LessEq::type_info;
|
||||
|
||||
op::v0::LessEq::LessEq(const Output<Node>& arg0,
|
||||
const Output<Node>& arg1,
|
||||
const AutoBroadcastSpec& auto_broadcast)
|
||||
: BinaryElementwiseComparison(arg0, arg1, auto_broadcast)
|
||||
{
|
||||
constructor_validate_and_infer_types();
|
||||
}
|
||||
|
||||
shared_ptr<Node> op::v0::LessEq::clone_with_new_inputs(const OutputVector& new_args) const
|
||||
{
|
||||
check_new_args_count(this, new_args);
|
||||
return make_shared<v0::LessEq>(new_args.at(0), new_args.at(1), this->get_autob());
|
||||
}
|
||||
|
||||
bool op::v0::LessEq::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const
|
||||
{
|
||||
OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::LessEq::evaluate");
|
||||
return less_equalop::evaluate_less_equal(inputs[0], inputs[1], outputs[0], get_autob());
|
||||
}
|
||||
|
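For call sites, the migration is mechanical, but note one behavioural difference:
the removed v0 comparison ops defaulted to no auto-broadcasting, while their v1
replacements default to NUMPY broadcasting. A minimal sketch (illustrative names,
assuming the usual ngraph headers):

    // Before: v0 op, no broadcast unless requested explicitly.
    // auto cmp = make_shared<op::v0::Greater>(a, b);

    // After: v1 op, NUMPY auto-broadcast by default.
    auto cmp = make_shared<op::v1::Greater>(a, b);

    // To keep the old strict-shape behaviour, pass the spec explicitly.
    auto cmp_strict = make_shared<op::v1::Greater>(
        a, b, op::AutoBroadcastSpec(op::AutoBroadcastType::NONE));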
@@ -32,22 +32,6 @@ using namespace ngraph;

-// ------------------------------------ v0 -------------------------------------
-
-constexpr NodeTypeInfo op::v0::Maximum::type_info;
-
-op::v0::Maximum::Maximum(const Output<Node>& arg0,
-                         const Output<Node>& arg1,
-                         const AutoBroadcastSpec& auto_broadcast)
-    : BinaryElementwiseArithmetic(arg0, arg1, auto_broadcast)
-{
-    constructor_validate_and_infer_types();
-}
-
-shared_ptr<Node> op::v0::Maximum::clone_with_new_inputs(const OutputVector& new_args) const
-{
-    check_new_args_count(this, new_args);
-    return make_shared<op::v0::Maximum>(new_args.at(0), new_args.at(1), this->get_autob());
-}
-
 namespace maximumop
 {
     template <element::Type_t ET>
@@ -92,13 +76,6 @@ namespace maximumop
     }
 }

-bool op::v0::Maximum::evaluate(const HostTensorVector& outputs,
-                               const HostTensorVector& inputs) const
-{
-    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Maximum::evaluate");
-    return maximumop::evaluate_maximum(inputs[0], inputs[1], outputs[0], get_autob());
-}
-
 // ------------------------------------ v1 -------------------------------------

 constexpr NodeTypeInfo op::v1::Maximum::type_info;

@@ -30,24 +30,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
 using namespace std;
 using namespace ngraph;

-// ------------------------------ v0 -------------------------------------------
-
-constexpr NodeTypeInfo op::v0::Minimum::type_info;
-
-op::v0::Minimum::Minimum(const Output<Node>& arg0,
-                         const Output<Node>& arg1,
-                         const AutoBroadcastSpec& auto_broadcast)
-    : BinaryElementwiseArithmetic(arg0, arg1, auto_broadcast)
-{
-    constructor_validate_and_infer_types();
-}
-
-shared_ptr<Node> op::v0::Minimum::clone_with_new_inputs(const OutputVector& new_args) const
-{
-    check_new_args_count(this, new_args);
-    return make_shared<op::v0::Minimum>(new_args.at(0), new_args.at(1), this->get_autob());
-}
-
 namespace minimumop
 {
     template <element::Type_t ET>
@@ -92,13 +74,6 @@ namespace minimumop
     }
 }

-bool op::v0::Minimum::evaluate(const HostTensorVector& outputs,
-                               const HostTensorVector& inputs) const
-{
-    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Minimum::evaluate");
-    return minimumop::evaluate_minimum(inputs[0], inputs[1], outputs[0], get_autob());
-}
-
 // ------------------------------ v1 -------------------------------------------

 constexpr NodeTypeInfo op::v1::Minimum::type_info;

@@ -24,24 +24,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
 using namespace std;
 using namespace ngraph;

-// ------------------------------------ v0 -------------------------------------
-
-constexpr NodeTypeInfo op::v0::Multiply::type_info;
-
-op::v0::Multiply::Multiply(const Output<Node>& arg0,
-                           const Output<Node>& arg1,
-                           const AutoBroadcastSpec& auto_broadcast)
-    : BinaryElementwiseArithmetic(arg0, arg1, auto_broadcast)
-{
-    constructor_validate_and_infer_types();
-}
-
-shared_ptr<Node> op::v0::Multiply::clone_with_new_inputs(const OutputVector& new_args) const
-{
-    check_new_args_count(this, new_args);
-    return make_shared<op::v0::Multiply>(new_args.at(0), new_args.at(1), this->get_autob());
-}
-
 namespace multiplyop
 {
     template <element::Type_t ET>
@@ -88,6 +70,24 @@ namespace multiplyop
     }
 }

+// ------------------------------------ v0 -------------------------------------
+
+constexpr NodeTypeInfo op::v0::Multiply::type_info;
+
+op::v0::Multiply::Multiply(const Output<Node>& arg0,
+                           const Output<Node>& arg1,
+                           const AutoBroadcastSpec& auto_broadcast)
+    : BinaryElementwiseArithmetic(arg0, arg1, auto_broadcast)
+{
+    constructor_validate_and_infer_types();
+}
+
+shared_ptr<Node> op::v0::Multiply::clone_with_new_inputs(const OutputVector& new_args) const
+{
+    check_new_args_count(this, new_args);
+    return make_shared<op::v0::Multiply>(new_args.at(0), new_args.at(1), this->get_autob());
+}
+
 bool op::v0::Multiply::evaluate(const HostTensorVector& outputs,
                                 const HostTensorVector& inputs) const
 {
@@ -119,10 +119,3 @@ bool op::v1::Multiply::evaluate(const HostTensorVector& outputs,
     OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v1::Multiply::evaluate");
     return multiplyop::evaluate_multiply(inputs[0], inputs[1], outputs[0], get_autob());
 }
-
-// -----------------------------------------------------------------------------
-
-shared_ptr<Node> ngraph::operator*(const Output<Node>& arg0, const Output<Node>& arg1)
-{
-    return make_shared<op::Multiply>(arg0, arg1);
-}
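(Note the asymmetry for Multiply: the v0::Multiply class definition is kept, merely
relocated below the multiplyop helpers, while the free ngraph::operator* overload
is deleted; graph construction is expected to go through explicit node
constructors from here on.)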
@@ -79,8 +79,8 @@ OutputVector op::MVN::decompose_op() const

     // calculate mean normalization
     auto mean = builder::opset1::mean(data, m_reduction_axes);
-    auto mean_normalization =
-        data - builder::opset1::make_broadcast(mean, data_shape, m_reduction_axes);
+    auto mean_normalization = std::make_shared<op::v1::Subtract>(
+        data, builder::opset1::make_broadcast(mean, data_shape, m_reduction_axes));

     if (!m_normalize_variance)
     {
@@ -93,10 +93,10 @@ OutputVector op::MVN::decompose_op() const
         // add epsilon
         auto eps_node = op::Constant::create(
             data.get_element_type(), Output<Node>(variance).get_shape(), vector<double>{m_eps});
-        variance = std::make_shared<op::Sqrt>(variance + eps_node);
-
-        return OutputVector{mean_normalization / builder::opset1::make_broadcast(
-                                variance, data_shape, m_reduction_axes)};
+        variance = std::make_shared<op::Sqrt>(std::make_shared<op::v1::Add>(variance, eps_node));
+        return OutputVector{std::make_shared<op::v1::Divide>(
+            mean_normalization,
+            builder::opset1::make_broadcast(variance, data_shape, m_reduction_axes))};
     }
 }
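(Both branches still implement the standard MVN formula:
MVN(x) = (x - mean(x)) / sqrt(variance(x) + eps) when variance normalization is
enabled, and just x - mean(x) otherwise; the change only swaps operator overloads
for explicit v1::Subtract, v1::Add, and v1::Divide nodes.)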
@@ -108,7 +108,7 @@ OutputVector op::NormalizeL2::decompose_op() const
     const auto axes = input_value(1);
     Output<Node> norm = builder::opset1::l2_norm(data, axes, m_eps, builder_bias_mode, true);

-    data = make_shared<op::Divide>(data, norm, AutoBroadcastSpec(AutoBroadcastType::NUMPY));
+    data = make_shared<op::v1::Divide>(data, norm, AutoBroadcastSpec(AutoBroadcastType::NUMPY));

     return OutputVector{data};
 }

@@ -24,24 +24,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
 using namespace std;
 using namespace ngraph;

-// ----------------------------------- v0 --------------------------------------
-
-constexpr NodeTypeInfo op::v0::NotEqual::type_info;
-
-op::v0::NotEqual::NotEqual(const Output<Node>& arg0,
-                           const Output<Node>& arg1,
-                           const AutoBroadcastSpec& auto_broadcast)
-    : BinaryElementwiseComparison(arg0, arg1, auto_broadcast)
-{
-    constructor_validate_and_infer_types();
-}
-
-shared_ptr<Node> op::v0::NotEqual::clone_with_new_inputs(const OutputVector& new_args) const
-{
-    check_new_args_count(this, new_args);
-    return make_shared<op::v0::NotEqual>(new_args.at(0), new_args.at(1), this->get_autob());
-}
-
 namespace not_equalop
 {
     template <element::Type_t ET>
@@ -88,13 +70,6 @@ namespace not_equalop
     }
 }

-bool op::v0::NotEqual::evaluate(const HostTensorVector& outputs,
-                                const HostTensorVector& inputs) const
-{
-    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::NotEqual::evaluate");
-    return not_equalop::evaluate_not_equal(inputs[0], inputs[1], outputs[0], get_autob());
-}
-
 // ----------------------------------- v1 --------------------------------------

 NGRAPH_RTTI_DEFINITION(op::v1::NotEqual, "NotEqual", 1);

@@ -27,24 +27,6 @@ NGRAPH_SUPPRESS_DEPRECATED_START
 using namespace std;
 using namespace ngraph;

-// ------------------------------ v0 -------------------------------------------
-
-constexpr NodeTypeInfo op::v0::Power::type_info;
-
-op::v0::Power::Power(const Output<Node>& arg0,
-                     const Output<Node>& arg1,
-                     const AutoBroadcastSpec& auto_broadcast)
-    : BinaryElementwiseArithmetic(arg0, arg1, auto_broadcast)
-{
-    constructor_validate_and_infer_types();
-}
-
-shared_ptr<Node> op::v0::Power::clone_with_new_inputs(const OutputVector& new_args) const
-{
-    check_new_args_count(this, new_args);
-    return make_shared<op::v0::Power>(new_args.at(0), new_args.at(1), this->get_autob());
-}
-
 namespace power
 {
     template <element::Type_t ET>
@@ -91,12 +73,6 @@ namespace power
     }
 }

-bool op::v0::Power::evaluate(const HostTensorVector& outputs, const HostTensorVector& inputs) const
-{
-    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Power::evaluate");
-    return power::evaluate_power(inputs[0], inputs[1], outputs[0], get_autob());
-}
-
 // ------------------------------ v1 -------------------------------------------

 constexpr NodeTypeInfo op::v1::Power::type_info;

@@ -75,14 +75,15 @@ OutputVector op::PRelu::decompose_op() const
     std::shared_ptr<ngraph::Node> zero_node = make_zero(data.get_element_type(), data.get_shape());

     std::shared_ptr<ngraph::Node> negative_map = std::make_shared<ngraph::op::Convert>(
-        std::make_shared<ngraph::op::Less>(data, zero_node), data.get_element_type());
+        std::make_shared<ngraph::op::v1::Less>(data, zero_node), data.get_element_type());

     std::shared_ptr<ngraph::Node> positive_map = std::make_shared<ngraph::op::Convert>(
-        std::make_shared<ngraph::op::Greater>(data, zero_node), data.get_element_type());
+        std::make_shared<ngraph::op::v1::Greater>(data, zero_node), data.get_element_type());

-    slope = negative_map * slope + positive_map;
+    slope = std::make_shared<op::v1::Multiply>(negative_map,
+                                               std::make_shared<op::v1::Add>(slope, positive_map));

-    return {data * slope};
+    return {std::make_shared<op::v1::Multiply>(data, slope)};
 }

 shared_ptr<Node> op::PRelu::clone_with_new_inputs(const OutputVector& new_args) const
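(negative_map and positive_map are the 0/1 indicator tensors produced by the
Less/Greater comparisons above; the mask arithmetic builds the elementwise
multiplier that realizes PRelu(x) = x for x > 0 and slope * x for x < 0.)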
@@ -171,45 +171,3 @@ bool op::v1::Select::evaluate(const HostTensorVector& output_values,

     return detail::evaluate_select(output_values, input_values, autob, get_output_element_type(0));
 }
-
-constexpr NodeTypeInfo op::v0::Select::type_info;
-
-op::v0::Select::Select(const Output<Node>& arg0, const Output<Node>& arg1, const Output<Node>& arg2)
-    : Op({arg0, arg1, arg2})
-{
-    constructor_validate_and_infer_types();
-}
-
-void op::v0::Select::validate_and_infer_types()
-{
-    NODE_VALIDATION_CHECK(this,
-                          get_input_element_type(0).is_dynamic() ||
-                              get_input_element_type(0) == element::Type_t::boolean,
-                          "Argument 0 must have boolean element type (element type: ",
-                          get_input_element_type(0),
-                          ").");
-
-    PartialShape result_shape = get_input_partial_shape(0);
-
-    NODE_VALIDATION_CHECK(this,
-                          PartialShape::merge_into(result_shape, get_input_partial_shape(1)),
-                          "Argument shapes are inconsistent.");
-    NODE_VALIDATION_CHECK(this,
-                          PartialShape::merge_into(result_shape, get_input_partial_shape(2)),
-                          "Argument shapes are inconsistent.");
-
-    element::Type result_et;
-
-    NODE_VALIDATION_CHECK(
-        this,
-        element::Type::merge(result_et, get_input_element_type(1), get_input_element_type(2)),
-        "Argument 1 and 2 element types are inconsistent.");
-
-    set_output_type(0, result_et, result_shape);
-}
-
-shared_ptr<Node> op::v0::Select::clone_with_new_inputs(const OutputVector& new_args) const
-{
-    check_new_args_count(this, new_args);
-    return make_shared<v0::Select>(new_args.at(0), new_args.at(1), new_args.at(2));
-}

@@ -13,10 +13,15 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 //*****************************************************************************
+#include <numeric>

-#include "ngraph/op/shuffle_channels.hpp"
 #include "ngraph/attribute_visitor.hpp"
 #include "ngraph/builder/reshape.hpp"
+#include "ngraph/op/shuffle_channels.hpp"
+#include "ngraph/runtime/host_tensor.hpp"
+#include "ngraph/runtime/opt_kernel/reshape.hpp"
+#include "ngraph/type/element_type.hpp"
+#include "ngraph/type/element_type_traits.hpp"

 using namespace std;
 using namespace ngraph;
@@ -28,7 +33,7 @@ constexpr NodeTypeInfo op::ShuffleChannels::type_info;
 op::ShuffleChannels::ShuffleChannels(const Output<Node>& data,
                                      const int64_t axis,
                                      const int64_t group)
-    : FusedOp({data})
+    : Op({data})
     , m_axis(axis)
     , m_group{group}
 {
@@ -61,8 +66,9 @@ size_t op::ShuffleChannels::get_zero_based_axis() const
     }
 }

-void op::ShuffleChannels::pre_validate_and_infer_types()
+void op::ShuffleChannels::validate_and_infer_types()
 {
+    const auto& data_type = get_input_element_type(0);
     if (get_input_partial_shape(0).is_static())
     {
         const auto shape = get_input_shape(0);
@@ -84,18 +90,13 @@ void op::ShuffleChannels::pre_validate_and_infer_types()
             this,
             channel_dim_size % m_group == 0,
             "The channel dimension size has to be a multiple of the groups parameter value.");
+        set_output_size(1);
+        set_output_type(0, data_type, shape);
     }
     else
     {
+        set_output_type(0, data_type, PartialShape::dynamic());
     }
 }

-OutputVector op::ShuffleChannels::decompose_op() const
-{
-    const auto data = input_value(0);
-    const auto& data_shape = data.get_shape();
-
-    const auto reshaped = builder::opset1::reshape(data, get_pre_shuffle_shape(data_shape));
-    const auto shuffled = builder::opset1::reorder_axes(reshaped, {0, 2, 1, 3});
-
-    return {builder::opset1::reshape(shuffled, data_shape)};
-}
-
 shared_ptr<Node> op::ShuffleChannels::clone_with_new_inputs(const OutputVector& new_args) const
@@ -137,3 +138,46 @@ Shape op::ShuffleChannels::get_pre_shuffle_shape(const Shape& data_shape) const

     return res;
 }
+
+bool op::ShuffleChannels::evaluate(const HostTensorVector& outputs,
+                                   const HostTensorVector& inputs) const
+{
+    const auto arg = inputs[0]->get_data_ptr<const char>();
+    auto out = outputs[0]->get_data_ptr<char>();
+    Shape data_shape = inputs[0]->get_shape();
+    const Shape& ds = data_shape;
+    size_t elem_size = inputs[0]->get_element_type().size();
+
+    Shape reshaped_out_shape(4, 1);
+    size_t axis_zb = m_axis >= 0 ? m_axis : m_axis + data_shape.size();
+    for (size_t i = 0; i < axis_zb; ++i)
+    {
+        reshaped_out_shape[0] *= ds[i];
+    }
+
+    reshaped_out_shape[1] = m_group;
+    reshaped_out_shape[2] = ds[axis_zb] / m_group;
+
+    for (size_t i = axis_zb + 1; i < ds.size(); ++i)
+    {
+        reshaped_out_shape[3] *= ds[i];
+    }
+    size_t data_size = shape_size(data_shape) * elem_size;
+
+    // first reshape from data_shape to reshaped_out_shape is skipped since it doesn't affect out
+    // data
+
+    Shape transpose_axes_order = {0, 2, 1, 3};
+    Shape transposed_shape(transpose_axes_order.size());
+
+    for (size_t i = 0; i < transpose_axes_order.size(); ++i)
+    {
+        transposed_shape[i] = data_shape.at(transpose_axes_order.at(i));
+    }
+    auto axis_vector = AxisVector{begin(transpose_axes_order), end(transpose_axes_order)};
+    runtime::opt_kernel::reshape(
+        arg, out, reshaped_out_shape, axis_vector, transposed_shape, elem_size);
+
+    // last reshape from transposed_shape to data_shape is skipped since it doesn't affect out data
+    return true;
+}
@@ -16,6 +16,7 @@
 #include <cmath>
 #include <cstddef>
 #include <memory>
+#include <numeric>

 #include "ngraph/builder/make_constant.hpp"
 #include "ngraph/node.hpp"
@@ -23,6 +24,9 @@
 #include "ngraph/ops.hpp"
 #include "ngraph/shape.hpp"

+#include "ngraph/runtime/opt_kernel/reshape.hpp"
+#include "ngraph/runtime/reference/pad.hpp"
+
 using namespace std;
 using namespace ngraph;

@@ -135,3 +139,132 @@ bool ngraph::op::v1::SpaceToBatch::visit_attributes(ngraph::AttributeVisitor& vi
 {
     return true;
 }
+
+bool ngraph::op::v1::SpaceToBatch::evaluate(const HostTensorVector& outputs,
+                                            const HostTensorVector& inputs) const
+{
+    const auto& data = inputs[0];
+    const auto& out = outputs[0];
+    const auto& out_shape = out->get_shape();
+    size_t elem_size = data->get_element_type().size();
+
+    if (data->get_partial_shape().is_dynamic())
+    {
+        return false;
+    }
+    auto data_shape = data->get_shape();
+
+    if (!(data->get_shape().size() == 4 || data->get_shape().size() == 5))
+    {
+        return false;
+    }
+
+    size_t block_values_size = shape_size(inputs[1]->get_shape());
+    const auto* block_values = inputs[1]->get_data_ptr<int64_t>();
+    const auto* pads_begin = inputs[2]->get_data_ptr<int64_t>();
+    const auto* pads_end = inputs[3]->get_data_ptr<int64_t>();
+
+    const char* pad_value = nullptr;
+    const std::vector<char> pad_zero_value(elem_size, 0);
+    if (inputs.size() == 4)
+    {
+        pad_value = inputs[3]->get_data_ptr<char>();
+    }
+    else
+    {
+        pad_value = pad_zero_value.data();
+    }
+    CoordinateDiff pads_begin_vec(shape_size(inputs[2]->get_shape()));
+    pads_begin_vec.assign(pads_begin, pads_begin + shape_size(inputs[2]->get_shape()));
+    CoordinateDiff pads_end_vec(shape_size(inputs[2]->get_shape()));
+    pads_end_vec.assign(pads_end, pads_end + shape_size(inputs[2]->get_shape()));
+
+    Shape padded_shape(data_shape.size());
+    for (size_t i = 0; i < data_shape.size(); ++i)
+    {
+        padded_shape[i] = data_shape[i] + pads_begin_vec[i] + pads_end_vec[i];
+    }
+
+    std::vector<char> padded_data(shape_size(padded_shape) * elem_size);
+    ngraph::runtime::reference::pad(data->get_data_ptr<char>(),
+                                    pad_value,
+                                    padded_data.data(),
+                                    elem_size,
+                                    data_shape,
+                                    padded_shape,
+                                    pads_begin_vec,
+                                    pads_end_vec,
+                                    ngraph::op::PadMode::CONSTANT);
+    data_shape = padded_shape;
+
+    Shape dispersed_shape(block_values_size + 1);
+    std::vector<size_t> axes_order(block_values_size + 1);
+    Shape squeezed_shape(data_shape.begin(), data_shape.end());
+    std::vector<size_t> plain_axes_order(block_values_size + 1);
+    std::iota(plain_axes_order.begin(), plain_axes_order.end(), 0);
+
+    std::vector<char> flat_data(padded_data.begin(), padded_data.end());
+    std::vector<char> dispersed_data(shape_size(data_shape) * elem_size);
+    std::vector<char> post_transpose_data(shape_size(data_shape) * elem_size);
+
+    for (int64_t block_idx = block_values_size - 1; block_idx >= 0; --block_idx)
+    {
+        int64_t sq_shape_idx = block_values_size - 1;
+        int64_t axis_idx = axes_order.size() - 1;
+        for (int64_t shape_idx = dispersed_shape.size() - 1; shape_idx >= 0; --shape_idx)
+        {
+            if (shape_idx == (block_idx + 1))
+            {
+                dispersed_shape[shape_idx] = block_values[block_idx];
+                axes_order[0] = shape_idx;
+            }
+            else if (shape_idx == block_idx)
+            {
+                dispersed_shape[shape_idx] = squeezed_shape[sq_shape_idx] / block_values[block_idx];
+                axes_order[axis_idx] = shape_idx;
+                axis_idx--;
+                sq_shape_idx--;
+            }
+            else
+            {
+                dispersed_shape[shape_idx] = squeezed_shape[sq_shape_idx];
+                axes_order[axis_idx] = shape_idx;
+                axis_idx--;
+                sq_shape_idx--;
+            }
+        }
+
+        runtime::opt_kernel::reshape(flat_data.data(),
+                                     dispersed_data.data(),
+                                     data_shape,
+                                     plain_axes_order,
+                                     dispersed_shape,
+                                     elem_size);
+        Shape post_transpose_shape(axes_order.size());
+        for (size_t i = 0; i < axes_order.size(); ++i)
+        {
+            post_transpose_shape[i] = dispersed_shape[axes_order[i]];
+        }
+
+        runtime::opt_kernel::reshape(dispersed_data.data(),
+                                     post_transpose_data.data(),
+                                     dispersed_shape,
+                                     axes_order,
+                                     post_transpose_shape,
+                                     elem_size);
+        squeezed_shape[0] *= block_values[block_idx];
+        squeezed_shape[block_idx] /= block_values[block_idx];
+
+        runtime::opt_kernel::reshape(post_transpose_data.data(),
+                                     flat_data.data(),
+                                     post_transpose_shape,
+                                     plain_axes_order,
+                                     squeezed_shape,
+                                     elem_size);
+        data_shape = squeezed_shape;
+    }
+
+    out->write(flat_data.data(), elem_size * shape_size(out->get_shape()));
+
+    return true;
+}
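(Shape bookkeeping for reference: after padding, SpaceToBatch produces
out[0] = N * prod(block_shape) and, for each spatial dimension i,
out[i] = (D_i + pads_begin[i] + pads_end[i]) / block_shape[i]; that is exactly
the squeezed_shape the loop above converges to, one block dimension per
iteration.)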
@@ -16,11 +16,14 @@
 #include <cmath>
 #include <cstddef>
 #include <memory>
+#include <numeric>

 #include "ngraph/attribute_visitor.hpp"
 #include "ngraph/builder/reshape.hpp"
 #include "ngraph/op/space_to_depth.hpp"
 #include "ngraph/shape.hpp"
 #include "space_to_depth.hpp"

+#include "ngraph/runtime/opt_kernel/reshape.hpp"
+
 using namespace std;
 using namespace ngraph;
@@ -32,7 +35,7 @@ constexpr NodeTypeInfo op::SpaceToDepth::type_info;
 op::SpaceToDepth::SpaceToDepth(const Output<Node>& data,
                                const SpaceToDepthMode& mode,
                                size_t block_size)
-    : FusedOp({data})
+    : Op({data})
     , m_blocksize(block_size)
     , m_mode(mode)
 {
@@ -51,26 +54,74 @@ bool ngraph::op::v0::SpaceToDepth::visit_attributes(AttributeVisitor& visitor)
     return true;
 }

-OutputVector op::SpaceToDepth::decompose_op() const
+shared_ptr<Node> op::SpaceToDepth::clone_with_new_inputs(const OutputVector& new_args) const
 {
-    auto data = input_value(0);
-    auto data_shape = data.get_shape();
-
-    NODE_VALIDATION_CHECK(this,
-                          (data_shape.size() >= 3),
-                          "The input tensor with rank lower than 3 is not supported (input rank: ",
-                          data_shape.size(),
-                          ")");
-
-    NODE_VALIDATION_CHECK(this, m_blocksize > 0, "m_blocksize must be greater than 0");
-
-    if (data_shape.size() == 3)
+    if (new_args.size() != 1)
     {
-        // Insert batch axis
-        data_shape.insert(data_shape.begin(), 1);
-        data = builder::opset1::reshape(data, data_shape);
+        throw ngraph_error("Incorrect number of new arguments");
     }
+    return make_shared<SpaceToDepth>(new_args.at(0), m_mode, m_blocksize);
+}
+
+void ngraph::op::v0::SpaceToDepth::validate_and_infer_types()
+{
+    PartialShape data_pshape = get_input_partial_shape(0);
+
+    const auto& data_type = get_input_element_type(0);
+
+    auto data = input_value(0);
+
+    if (data_pshape.is_static())
+    {
+        const auto& data_shape = data.get_shape();
+
+        NODE_VALIDATION_CHECK(
+            this,
+            !(data_shape.size() < 3),
+            "The input tensor with rank lower than 3 is not supported (input rank: ",
+            data_shape.size(),
+            ")");
+
+        auto multiplier = std::pow(m_blocksize, data_shape.size() - 2);
+
+        auto out_shape = data_shape;
+        out_shape[1] *= multiplier;
+        for (size_t i = 2; i < out_shape.size(); i++)
+        {
+            NODE_VALIDATION_CHECK(this,
+                                  m_blocksize > 0 && !(out_shape[i] % m_blocksize),
+                                  "The dimension on position: ",
+                                  i,
+                                  " equal to: ",
+                                  out_shape[i],
+                                  " must be a multiple of m_blocksize: ",
+                                  m_blocksize);

+            out_shape[i] /= m_blocksize;
+        }
+
+        set_output_size(1);
+        set_output_type(0, data_type, out_shape);
+    }
+    else
+    {
+        set_output_type(0, data_type, PartialShape::dynamic());
+    }
+}
+
+bool ngraph::op::v0::SpaceToDepth::evaluate(const HostTensorVector& outputs,
+                                            const HostTensorVector& inputs) const
+{
+    const auto& data = inputs[0];
+    const auto& out = outputs[0];
+    const auto& out_shape = out->get_shape();
+    size_t elem_size = data->get_element_type().size();
+
+    if (data->get_partial_shape().is_dynamic())
+    {
+        return false;
+    }
+    auto data_shape = data->get_shape();
+    const size_t n_dim = data_shape.at(0);
+    const size_t c_dim = data_shape.at(1);
+    const size_t spatial_dim_index = 2;
@@ -97,7 +148,15 @@ OutputVector op::SpaceToDepth::decompose_op() const
         dispersed_shape.push_back(data_shape.at(i + spatial_dim_index) / m_blocksize);
         dispersed_shape.push_back(m_blocksize);
     }
-    auto flat_node = builder::opset1::reshape(data, dispersed_shape);
+    std::vector<size_t> plain_axes_order(data_shape.size());
+    std::iota(plain_axes_order.begin(), plain_axes_order.end(), 0);
+    std::vector<char> dispersed_data(shape_size(data_shape) * elem_size);
+    runtime::opt_kernel::reshape(data->get_data_ptr<char>(),
+                                 dispersed_data.data(),
+                                 data_shape,
+                                 plain_axes_order,
+                                 dispersed_shape,
+                                 elem_size);
     // calculate axes to transpose
     // [0, 3, 5, ..., spatial_dims + (spatial_dims + 1), 2, 4, ..., K + K])
     vector<size_t> axes_order{0};
@@ -131,25 +190,37 @@ OutputVector op::SpaceToDepth::decompose_op() const
     default: { axes_order.insert(axes_order.begin() + spatial_dims + 1, 1);
     }
     }
-    flat_node = builder::opset1::reorder_axes(flat_node, axes_order);
+    std::vector<char> transposed_data(shape_size(data_shape) * elem_size);
+    Shape post_transpose_shape(axes_order.size());
+    for (size_t axis_idx = 0; axis_idx < axes_order.size(); ++axis_idx)
+    {
+        post_transpose_shape[axis_idx] = dispersed_shape[axes_order[axis_idx]];
+    }
+
+    runtime::opt_kernel::reshape(dispersed_data.data(),
+                                 transposed_data.data(),
+                                 dispersed_shape,
+                                 axes_order,
+                                 post_transpose_shape,
+                                 elem_size);

     Shape squeezed_shape{n_dim};
     for (int i = 0; i < spatial_dims; ++i)
     {
         squeezed_shape.push_back(data_shape.at(spatial_dim_index + i) / m_blocksize);
     }
     squeezed_shape.insert(squeezed_shape.begin() + 1, c_dim * std::pow(m_blocksize, spatial_dims));
-    flat_node = builder::opset1::reshape(flat_node, squeezed_shape);
-
-    return OutputVector{flat_node};
-}
-
-shared_ptr<Node> op::SpaceToDepth::clone_with_new_inputs(const OutputVector& new_args) const
-{
-    if (new_args.size() != 1)
+    for (size_t i = plain_axes_order.size() - 1; i < post_transpose_shape.size() - 1; ++i)
     {
-        throw ngraph_error("Incorrect number of new arguments");
+        plain_axes_order.push_back(plain_axes_order[i] + 1);
     }
-    return make_shared<SpaceToDepth>(new_args.at(0), m_mode, m_blocksize);
+    runtime::opt_kernel::reshape(transposed_data.data(),
+                                 out->get_data_ptr<char>(),
+                                 post_transpose_shape,
+                                 plain_axes_order,
+                                 squeezed_shape,
+                                 elem_size);
+    return true;
 }

 namespace ngraph
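(The relocated validation computes the SpaceToDepth output shape directly:
out[1] = C * blocksize^(rank - 2), and each spatial dimension is divided by
blocksize, with a check that it divides evenly.)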
@@ -48,9 +48,9 @@ OutputVector op::SquaredDifference::decompose_op() const
     const auto x1 = input_value(0);
     const auto x2 = input_value(1);

-    const auto difference = make_shared<op::Subtract>(x1, x2, m_autobroadcast);
+    const auto difference = make_shared<op::v1::Subtract>(x1, x2, m_autobroadcast);

-    return {difference * difference};
+    return {make_shared<op::v1::Multiply>(difference, difference)};
 }

 shared_ptr<Node> op::SquaredDifference::clone_with_new_inputs(const OutputVector& new_args) const

@@ -154,38 +154,6 @@ namespace squeeze
                              const HostTensorPtr& out)
         {
             auto element_type = arg0->get_element_type();
-            out->set_element_type(element_type);
-
-            auto data_shape = arg0->get_shape();
-            int64_t data_rank = static_cast<int64_t>(data_shape.size());
-            auto axes_shape = arg1->get_shape();
-            NGRAPH_CHECK(axes_shape.size() <= 1, "Axes to remove must be a vector or empty.");
-
-            auto out_shape = data_shape;
-            // Empty axes vector
-            if (axes_shape.size() == 0 || axes_shape[0] == 0)
-            {
-                out_shape.erase(std::remove(out_shape.begin(), out_shape.end(), 1), out_shape.end());
-            }
-            else
-            {
-                // Get axes
-                vector<int64_t> axes = read_index_vector(arg1);
-                // Normalize axes
-                std::transform(axes.begin(),
-                               axes.end(),
-                               axes.begin(),
-                               [data_rank](int64_t i) -> int64_t { return i < 0 ? data_rank + i : i; });
-                // Sort in decreasing order
-                std::set<int64_t, greater<int64_t>> axes_set(axes.begin(), axes.end());
-                for (int64_t axis : axes_set)
-                {
-                    NGRAPH_CHECK(axis >= 0 && axis < data_rank, "Axis is out of bounds: ", axis);
-                    NGRAPH_CHECK(out_shape[axis] == 1, "Only axis of size 1 can be removed.");
-                    out_shape.erase(out_shape.begin() + axis);
-                }
-            }
-            out->set_shape(out_shape);

             bool rc = true;
             switch (element_type)
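(The removed block recomputed the Squeeze output shape inside evaluate: negative
axes were normalized as axis < 0 ? data_rank + axis : axis, then erased in
decreasing order so earlier erases did not shift later indices; with this change
the output shape is expected to be set up before evaluate runs.)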
@@ -20,34 +20,9 @@
 #include "ngraph/runtime/host_tensor.hpp"
 #include "ngraph/runtime/reference/subtract.hpp"

-NGRAPH_SUPPRESS_DEPRECATED_START
-
 using namespace std;
 using namespace ngraph;

-// ------------------------------- v0 ------------------------------------------
-
-constexpr NodeTypeInfo op::v0::Subtract::type_info;
-
-op::v0::Subtract::Subtract(const Output<Node>& arg0,
-                           const Output<Node>& arg1,
-                           const AutoBroadcastSpec& auto_broadcast)
-    : BinaryElementwiseArithmetic(arg0, arg1, auto_broadcast)
-{
-    constructor_validate_and_infer_types();
-}
-
-shared_ptr<Node> op::v0::Subtract::clone_with_new_inputs(const OutputVector& new_args) const
-{
-    check_new_args_count(this, new_args);
-    return make_shared<op::v0::Subtract>(new_args.at(0), new_args.at(1), this->get_autob());
-}
-
-shared_ptr<ngraph::Node> ngraph::operator-(const Output<Node> arg0, const Output<Node> arg1)
-{
-    return make_shared<op::v0::Subtract>(arg0, arg1);
-}
-
 namespace subtract
 {
     template <element::Type_t ET>
@@ -94,13 +69,6 @@ namespace subtract
     }
 }

-bool op::v0::Subtract::evaluate(const HostTensorVector& outputs,
-                                const HostTensorVector& inputs) const
-{
-    OV_ITT_SCOPED_TASK(itt::domains::nGraphOp, "op::v0::Subtract::evaluate");
-    return subtract::evaluate_subtract(inputs[0], inputs[1], outputs[0], get_autob());
-}
-
 // ------------------------------- v1 ------------------------------------------

 NGRAPH_RTTI_DEFINITION(op::v1::Subtract, "Subtract", 1, util::BinaryElementwiseArithmetic);
@@ -94,20 +94,14 @@ bool ngraph::op::is_constant(const ngraph::Node* node)

 bool ngraph::op::is_commutative(const ngraph::Node* node)
 {
-    return dynamic_cast<const ngraph::op::v0::Add*>(node) != nullptr ||
-           dynamic_cast<const ngraph::op::v1::Add*>(node) != nullptr ||
-           dynamic_cast<const ngraph::op::v0::Maximum*>(node) != nullptr ||
+    return dynamic_cast<const ngraph::op::v1::Add*>(node) != nullptr ||
           dynamic_cast<const ngraph::op::v1::Maximum*>(node) != nullptr ||
-           dynamic_cast<const ngraph::op::v0::Equal*>(node) != nullptr ||
           dynamic_cast<const ngraph::op::v1::Equal*>(node) != nullptr ||
-           dynamic_cast<const ngraph::op::v0::NotEqual*>(node) != nullptr ||
           dynamic_cast<const ngraph::op::v1::NotEqual*>(node) != nullptr ||
           dynamic_cast<const ngraph::op::v1::LogicalAnd*>(node) != nullptr ||
           dynamic_cast<const ngraph::op::v0::Xor*>(node) != nullptr ||
           dynamic_cast<const ngraph::op::v1::LogicalXor*>(node) != nullptr ||
-           dynamic_cast<const ngraph::op::v0::Minimum*>(node) != nullptr ||
           dynamic_cast<const ngraph::op::v1::Minimum*>(node) != nullptr ||
           dynamic_cast<const ngraph::op::v0::Multiply*>(node) != nullptr ||
           dynamic_cast<const ngraph::op::v1::Multiply*>(node) != nullptr ||
           dynamic_cast<const ngraph::op::v1::LogicalOr*>(node) != nullptr;
 }

@@ -1145,7 +1145,6 @@ pair<bool, uint64_t> ngraph::maximum_value(const Output<Node>& value)
     {op::v0::Constant::type_info, exec_constant},
     {op::v0::Convert::type_info, exec_nop},
     {op::v1::Gather::type_info, exec_gather},
-    {op::v0::Minimum::type_info, exec_minimum},
     {op::v1::Minimum::type_info, exec_minimum},
     {op::v1::ReduceMin::type_info, exec_reduce_min},
     {op::v1::Reshape::type_info, exec_nop},
@@ -58,8 +58,10 @@ namespace ngraph
                         const int split_parts = 2 * 3;
                         const auto split_bias =
                             builder::opset1::split(bias, split_parts, 1);
-                        const auto wr_z_bias = split_bias.at(0) + split_bias.at(3);
-                        const auto wr_r_bias = split_bias.at(1) + split_bias.at(4);
+                        const auto wr_z_bias = std::make_shared<ngraph::op::v1::Add>(
+                            split_bias.at(0), split_bias.at(3));
+                        const auto wr_r_bias = std::make_shared<ngraph::op::v1::Add>(
+                            split_bias.at(1), split_bias.at(4));
                         // The result has shape: [num_directions, 4 * hidden_size]
                         // and data layout:
                         // [

@@ -66,7 +66,8 @@ namespace ngraph
                     auto bias = ng_inputs.at(3);
                     auto split_bias = builder::opset1::split(bias, 2, 1);
                     NGRAPH_SUPPRESS_DEPRECATED_START
-                    m_map[OpInput::B] = split_bias.at(0) + split_bias.at(1);
+                    m_map[OpInput::B] =
+                        std::make_shared<ngraph::op::v1::Add>(split_bias.at(0), split_bias.at(1));
                     NGRAPH_SUPPRESS_DEPRECATED_END
                 }
                 else
@@ -41,27 +41,27 @@ void regclass_pyngraph_Node(py::module m)
     node.doc() = "ngraph.impl.Node wraps ngraph::Node";
     node.def("__add__",
              [](const std::shared_ptr<ngraph::Node>& a, const std::shared_ptr<ngraph::Node> b) {
-                 return a + b;
+                 return std::make_shared<ngraph::op::v1::Add>(a, b);
              },
              py::is_operator());
     node.def("__sub__",
              [](const std::shared_ptr<ngraph::Node>& a, const std::shared_ptr<ngraph::Node> b) {
-                 return a - b;
+                 return std::make_shared<ngraph::op::v1::Subtract>(a, b);
              },
              py::is_operator());
     node.def("__mul__",
              [](const std::shared_ptr<ngraph::Node>& a, const std::shared_ptr<ngraph::Node> b) {
-                 return a * b;
+                 return std::make_shared<ngraph::op::v1::Multiply>(a, b);
              },
              py::is_operator());
     node.def("__div__",
              [](const std::shared_ptr<ngraph::Node>& a, const std::shared_ptr<ngraph::Node> b) {
-                 return a / b;
+                 return std::make_shared<ngraph::op::v1::Divide>(a, b);
              },
              py::is_operator());
     node.def("__truediv__",
              [](const std::shared_ptr<ngraph::Node>& a, const std::shared_ptr<ngraph::Node> b) {
-                 return a / b;
+                 return std::make_shared<ngraph::op::v1::Divide>(a, b);
              },
              py::is_operator());
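(Python-side behaviour is unchanged: a + b, a - b, a * b, and a / b on wrapped
nodes still work, but the dunder methods now construct v1 nodes directly instead
of relying on the removed C++ operator overloads.)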
@@ -235,7 +235,6 @@ endif()

 if (NGRAPH_INTERPRETER_ENABLE)
     list(APPEND SRC
-        backend_debug_api.cpp
         builder.cpp
         backend_api.cpp)
     set(ACTIVE_BACKEND_LIST ${ACTIVE_BACKEND_LIST} INTERPRETER)
@@ -318,7 +317,6 @@ set(MULTI_TEST_SRC
     backend/pad.in.cpp
     backend/parameter_as_output.in.cpp
     backend/power.in.cpp
-    backend/quantize_dequantize.in.cpp
     backend/range.in.cpp
     backend/reduce_max.in.cpp
     backend/reduce_mean.in.cpp
@@ -20,8 +20,6 @@
 #include "util/test_case.hpp"
 #include "util/test_control.hpp"

-NGRAPH_SUPPRESS_DEPRECATED_START
-
 using namespace std;
 using namespace ngraph;

@@ -34,7 +32,8 @@ NGRAPH_TEST(${BACKEND_NAME}, abc)
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto C = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>((A + B) * C, ParameterVector{A, B, C});
+    auto arg = make_shared<op::v1::Multiply>(make_shared<op::v1::Add>(A, B), C);
+    auto f = make_shared<Function>(arg, ParameterVector{A, B, C});

     std::vector<float> a{1, 2, 3, 4};
     std::vector<float> b{5, 6, 7, 8};
@@ -65,7 +64,8 @@ NGRAPH_TEST(${BACKEND_NAME}, abc_int64)
     auto A = make_shared<op::Parameter>(element::Type_t::i64, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::i64, shape);
     auto C = make_shared<op::Parameter>(element::Type_t::i64, shape);
-    auto f = make_shared<Function>((A + B) * C, ParameterVector{A, B, C});
+    auto arg = make_shared<op::v1::Multiply>(make_shared<op::v1::Add>(A, B), C);
+    auto f = make_shared<Function>(arg, ParameterVector{A, B, C});

     std::vector<int64_t> a{1, 2, 3, 4};
     std::vector<int64_t> b{5, 6, 7, 8};
@@ -37,8 +37,6 @@
 #include "util/test_case.hpp"
 #include "util/test_control.hpp"

-NGRAPH_SUPPRESS_DEPRECATED_START
-
 using namespace std;
 using namespace ngraph;

@@ -50,7 +48,7 @@ NGRAPH_TEST(${BACKEND_NAME}, add)
     Shape shape{2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(make_shared<op::Add>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Add>(A, B), ParameterVector{A, B});

     vector<float> a{1, 2, 3, 4};
     vector<float> b{5, 6, 7, 8};
@@ -66,7 +64,7 @@ NGRAPH_TEST(${BACKEND_NAME}, add_overload)
     Shape shape{2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(A + B, ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Add>(A, B), ParameterVector{A, B});

     vector<float> a{1, 2, 3, 4};
     vector<float> b{5, 6, 7, 8};
@@ -82,10 +80,10 @@ NGRAPH_TEST(${BACKEND_NAME}, add_in_place)
     Shape shape{2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto T = A + B;
-    auto T2 = T + T;
-    auto T3 = T2 + T2;
-    auto T4 = T3 + T3;
+    auto T = make_shared<op::v1::Add>(A, B);
+    auto T2 = make_shared<op::v1::Add>(T, T);
+    auto T3 = make_shared<op::v1::Add>(T2, T2);
+    auto T4 = make_shared<op::v1::Add>(T3, T3);

     auto f = make_shared<Function>(T4, ParameterVector{A, B});

@@ -20,8 +20,6 @@
 #include "util/test_case.hpp"
 #include "util/test_control.hpp"

-NGRAPH_SUPPRESS_DEPRECATED_START
-
 using namespace std;
 using namespace ngraph;

@@ -33,9 +31,9 @@ NGRAPH_TEST(${BACKEND_NAME}, aliased_output)
     Shape shape{2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto C = A + B;
-    auto D = A * B;
-    auto E = op::Constant::create(element::Type_t::f32, shape, {1, 2, 3, 4});
+    auto C = make_shared<op::v1::Add>(A, B);
+    auto D = make_shared<op::v1::Multiply>(A, B);
+    auto E = op::Constant::create(element::f32, shape, {1, 2, 3, 4});
     auto f = make_shared<Function>(NodeVector{C, C, D, D, C, E, E}, ParameterVector{A, B});

     vector<float> a{0, 1, 2, 3};
@@ -24,8 +24,6 @@
 #include "util/test_control.hpp"
 #include "util/test_tools.hpp"

-NGRAPH_SUPPRESS_DEPRECATED_START
-
 using namespace std;
 using namespace ngraph;

@@ -37,7 +35,7 @@ NGRAPH_TEST(${BACKEND_NAME}, create_tensor_1)
     Shape shape{2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(make_shared<op::Add>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Add>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -63,7 +61,8 @@ NGRAPH_TEST(${BACKEND_NAME}, get_parameters_and_results)
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto C = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>((A + B) * C, ParameterVector{A, B, C});
+    auto arg = make_shared<op::v1::Multiply>(make_shared<op::v1::Add>(A, B), C);
+    auto f = make_shared<Function>(arg, ParameterVector{A, B, C});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -114,7 +114,7 @@ NGRAPH_TEST(${BACKEND_NAME}, auto_bcast_binary_elementwise_pdpd_dynamic)
     auto b = make_shared<op::Parameter>(element::Type_t::f32, pshape_b);

     op::AutoBroadcastSpec autob = op::AutoBroadcastSpec(op::AutoBroadcastType::PDPD, -1);
-    auto f = make_shared<Function>(make_shared<op::Add>(a, b, autob), ParameterVector{a, b});
+    auto f = make_shared<Function>(make_shared<op::v1::Add>(a, b, autob), ParameterVector{a, b});
     auto backend = runtime::Backend::create("${BACKEND_NAME}", true);
     auto ex = backend->compile(f);

@@ -132,7 +132,7 @@ NGRAPH_TEST(${BACKEND_NAME}, auto_bcast_binary_elementwise_pdpd_dynamic)

     // a shape {2, 3, 4, 5}, b shape {3, 4} axis = 1
     autob = op::AutoBroadcastSpec(op::AutoBroadcastType::PDPD, 1);
-    f = make_shared<Function>(make_shared<op::Add>(a, b, autob), ParameterVector{a, b});
+    f = make_shared<Function>(make_shared<op::v1::Add>(a, b, autob), ParameterVector{a, b});
     ex = backend->compile(f);
     t_r = backend->create_dynamic_tensor(element::Type_t::f32, PartialShape::dynamic());
     t_a = backend->create_tensor(element::Type_t::f32, Shape{2, 3, 4, 5});
@@ -157,21 +157,21 @@ NGRAPH_TEST(${BACKEND_NAME}, auto_bcast_string_cast)
     auto a = make_shared<op::Parameter>(element::Type_t::f32, Shape{1});
     auto b = make_shared<op::Parameter>(element::Type_t::f32, Shape{1});

-    auto add = make_shared<op::Add>(a, b, "NUMPY");
+    auto add = make_shared<op::v1::Add>(a, b, "NUMPY");
     ASSERT_EQ(add->get_autob(), op::AutoBroadcastType::NUMPY);

-    add = make_shared<op::Add>(a, b, "NONE");
+    add = make_shared<op::v1::Add>(a, b, "NONE");
     ASSERT_EQ(add->get_autob(), op::AutoBroadcastType::NONE);

-    add = make_shared<op::Add>(a, b, "PDPD");
+    add = make_shared<op::v1::Add>(a, b, "PDPD");
     ASSERT_EQ(add->get_autob(), op::AutoBroadcastType::PDPD);

-    add = make_shared<op::Add>(a, b, "EXPLICIT");
+    add = make_shared<op::v1::Add>(a, b, "EXPLICIT");
     ASSERT_EQ(add->get_autob(), op::AutoBroadcastType::EXPLICIT);

     try
     {
-        add = make_shared<op::Add>(a, b, "UNKNOWN");
+        add = make_shared<op::v1::Add>(a, b, "UNKNOWN");
         FAIL() << "Unknown AutoBroadcastType not detected.";
     }
     catch (const ngraph_error& error)
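(These tests also document the string conversions: "NUMPY", "NONE", "PDPD", and
"EXPLICIT" map onto the corresponding AutoBroadcastType values, and an unknown
string such as "UNKNOWN" raises ngraph_error.)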
@@ -33,8 +33,6 @@
 #include "util/test_control.hpp"
 #include "util/test_tools.hpp"

-NGRAPH_SUPPRESS_DEPRECATED_START
-
 using namespace std;
 using namespace ngraph;

@@ -45,7 +43,7 @@ NGRAPH_TEST(${BACKEND_NAME}, equal)
     Shape shape{2, 2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(make_shared<op::Equal>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Equal>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -66,7 +64,7 @@ NGRAPH_TEST(${BACKEND_NAME}, notequal)
     Shape shape{2, 2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(make_shared<op::NotEqual>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::NotEqual>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -87,7 +85,7 @@ NGRAPH_TEST(${BACKEND_NAME}, greater)
     Shape shape{2, 2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(make_shared<op::Greater>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Greater>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -108,7 +106,7 @@ NGRAPH_TEST(${BACKEND_NAME}, greater_int64)
     Shape shape{2, 2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::i64, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::i64, shape);
-    auto f = make_shared<Function>(make_shared<op::Greater>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Greater>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -129,7 +127,7 @@ NGRAPH_TEST(${BACKEND_NAME}, greatereq)
     Shape shape{2, 2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(make_shared<op::GreaterEq>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::GreaterEqual>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -150,7 +148,7 @@ NGRAPH_TEST(${BACKEND_NAME}, less)
     Shape shape{2, 2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(make_shared<op::Less>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Less>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -171,7 +169,7 @@ NGRAPH_TEST(${BACKEND_NAME}, lesseq)
     Shape shape{2, 2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(make_shared<op::LessEq>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::LessEqual>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -192,7 +190,7 @@ NGRAPH_TEST(${BACKEND_NAME}, lesseq_int32)
     Shape shape{2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::i32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::i32, shape);
-    auto f = make_shared<Function>(make_shared<op::LessEq>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::LessEqual>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -213,7 +211,7 @@ NGRAPH_TEST(${BACKEND_NAME}, lesseq_bool)
     Shape shape{2, 2, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::boolean, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::boolean, shape);
-    auto f = make_shared<Function>(make_shared<op::LessEq>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::LessEqual>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@ -291,11 +291,11 @@ NGRAPH_TEST(${BACKEND_NAME}, concat_in_place_2d_tensor)
|
||||
Shape shape{1, 1};
|
||||
auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto add1 = make_shared<op::Add>(A, B);
|
||||
auto add1 = make_shared<op::v1::Add>(A, B);
|
||||
auto C = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto D = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto add2 = make_shared<op::Add>(C, D);
|
||||
auto subtract = make_shared<op::Subtract>(C, A);
|
||||
auto add2 = make_shared<op::v1::Add>(C, D);
|
||||
auto subtract = make_shared<op::v1::Subtract>(C, A);
|
||||
Shape shape_r{3, 1};
|
||||
auto f = make_shared<Function>(make_shared<op::Concat>(NodeVector{add1, add2, subtract}, 0),
|
||||
ParameterVector{A, B, C, D});
|
||||
@ -324,12 +324,12 @@ NGRAPH_TEST(${BACKEND_NAME}, concat_in_place_propagate_2d_tensor)
|
||||
Shape shape{1, 1};
|
||||
auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto add1 = make_shared<op::Add>(A, B);
|
||||
auto add1 = make_shared<op::v1::Add>(A, B);
|
||||
auto C = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto D = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto add2 = make_shared<op::Add>(C, D);
|
||||
auto add2 = make_shared<op::v1::Add>(C, D);
|
||||
auto concat1 = make_shared<op::Concat>(NodeVector{add1, add2}, 0);
|
||||
auto subtract = make_shared<op::Subtract>(C, A);
|
||||
auto subtract = make_shared<op::v1::Subtract>(C, A);
|
||||
Shape shape_r{3, 1};
|
||||
auto f = make_shared<Function>(make_shared<op::Concat>(NodeVector{concat1, subtract}, 0),
|
||||
ParameterVector{A, B, C, D});
|
||||
@ -359,10 +359,10 @@ NGRAPH_TEST(${BACKEND_NAME}, concat_in_place_tree_1)
|
||||
Shape shape_r{1, 4, 2};
|
||||
auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto add1 = make_shared<op::Add>(A, B);
|
||||
auto add2 = make_shared<op::Add>(A, B);
|
||||
auto add1 = make_shared<op::v1::Add>(A, B);
|
||||
auto add2 = make_shared<op::v1::Add>(A, B);
|
||||
auto concat = make_shared<op::Concat>(NodeVector{add1, add2}, 1);
|
||||
auto f = make_shared<Function>(make_shared<op::Add>(concat, concat), ParameterVector{A, B});
|
||||
auto f = make_shared<Function>(make_shared<op::v1::Add>(concat, concat), ParameterVector{A, B});
|
||||
auto backend = runtime::Backend::create("${BACKEND_NAME}");
|
||||
// Create some tensors for input/output
|
||||
auto a = backend->create_tensor(element::Type_t::f32, shape);
|
||||
@ -385,12 +385,13 @@ NGRAPH_TEST(${BACKEND_NAME}, concat_in_place_tree_2)
|
||||
Shape shape_r{1, 8, 2};
|
||||
auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
|
||||
auto add1 = make_shared<op::Add>(A, B);
|
||||
auto add2 = make_shared<op::Add>(A, B);
|
||||
auto add1 = make_shared<op::v1::Add>(A, B);
|
||||
auto add2 = make_shared<op::v1::Add>(A, B);
|
||||
auto concat1 = make_shared<op::Concat>(NodeVector{add1, add2}, 1);
|
||||
auto concat2 = make_shared<op::Concat>(NodeVector{add1, add2}, 1);
|
||||
auto concat12 = make_shared<op::Concat>(NodeVector{concat1, concat2}, 1);
|
||||
auto f = make_shared<Function>(make_shared<op::Add>(concat12, concat12), ParameterVector{A, B});
|
||||
auto f =
|
||||
make_shared<Function>(make_shared<op::v1::Add>(concat12, concat12), ParameterVector{A, B});
|
||||
auto backend = runtime::Backend::create("${BACKEND_NAME}");
|
||||
|
||||
// Create some tensors for input/output
|
||||
@@ -420,7 +421,8 @@ NGRAPH_TEST(${BACKEND_NAME}, concat_in_place_tree_3)
     auto concat12 = make_shared<op::Concat>(NodeVector{concat1, concat2}, 1);
     auto concat34 = make_shared<op::Concat>(NodeVector{concat3, concat4}, 1);
     auto concat14 = make_shared<op::Concat>(NodeVector{concat12, concat34}, 1);
-    auto f = make_shared<Function>(make_shared<op::Add>(concat14, concat14), ParameterVector{A, B});
+    auto f =
+        make_shared<Function>(make_shared<op::v1::Add>(concat14, concat14), ParameterVector{A, B});
     auto backend = runtime::Backend::create("${BACKEND_NAME}");
     // Create some tensors for input/output
     auto a = backend->create_tensor(element::Type_t::f32, shape);
@@ -442,10 +444,10 @@ NGRAPH_TEST(${BACKEND_NAME}, concat_in_place_add_concat)
     Shape shape_r{4, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto add1 = make_shared<op::Add>(A, B);
-    auto add2 = make_shared<op::Add>(add1, add1);
+    auto add1 = make_shared<op::v1::Add>(A, B);
+    auto add2 = make_shared<op::v1::Add>(add1, add1);
     auto concat = make_shared<op::Concat>(NodeVector{add1, add2}, 0);
-    auto add3 = make_shared<op::Add>(concat, concat);
+    auto add3 = make_shared<op::v1::Add>(concat, concat);
     auto f = make_shared<Function>(add3, ParameterVector{A, B});
     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -466,17 +468,17 @@ NGRAPH_TEST(${BACKEND_NAME}, concat_in_place_add_concat_2)
     Shape shape_r{1, 6, 2};
     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto add1 = make_shared<op::Add>(A, B);
-    auto add2 = make_shared<op::Add>(A, B);
-    auto add3 = make_shared<op::Add>(A, B);
-    auto add4 = make_shared<op::Add>(A, B);
-    auto add5 = make_shared<op::Add>(A, B);
+    auto add1 = make_shared<op::v1::Add>(A, B);
+    auto add2 = make_shared<op::v1::Add>(A, B);
+    auto add3 = make_shared<op::v1::Add>(A, B);
+    auto add4 = make_shared<op::v1::Add>(A, B);
+    auto add5 = make_shared<op::v1::Add>(A, B);

     auto concat1 = make_shared<op::Concat>(NodeVector{add1, add2, add3}, 1);

     auto concat2 = make_shared<op::Concat>(NodeVector{add4, add2, add5}, 1);

-    auto add6 = make_shared<op::Add>(concat1, concat2);
+    auto add6 = make_shared<op::v1::Add>(concat1, concat2);
     auto f = make_shared<Function>(add6, ParameterVector{A, B});
     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -175,11 +175,11 @@ NGRAPH_TEST(${BACKEND_NAME}, constant_equality_bool)
     Shape shape{4};
     // auto A = make_shared<op::Parameter>(element::Type_t::boolean, shape);
     // auto B = make_shared<op::Parameter>(element::Type_t::boolean, shape);
-    // auto f = make_shared<Function>(make_shared<op::Equal>(A, B), ParameterVector{A, B});
+    // auto f = make_shared<Function>(make_shared<op::v1::Equal>(A, B), ParameterVector{A, B});

     auto A = op::Constant::create(element::Type_t::boolean, shape, {true, false, true, false});
     auto B = op::Constant::create(element::Type_t::boolean, shape, {true, true, true, true});
-    auto f = make_shared<Function>(make_shared<op::Equal>(A, B), ParameterVector{});
+    auto f = make_shared<Function>(make_shared<op::v1::Equal>(A, B), ParameterVector{});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

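v1::Equal, like the other opset1 comparisons, always yields a boolean tensor and defaults to NUMPY broadcasting. An illustrative sketch mirroring the constants above:

    // Elementwise comparison of two boolean constants; the result element
    // type is boolean regardless of the input element type.
    auto A = op::Constant::create(element::Type_t::boolean, Shape{4}, {true, false, true, false});
    auto B = op::Constant::create(element::Type_t::boolean, Shape{4}, {true, true, true, true});
    auto eq = make_shared<op::v1::Equal>(A, B); // -> {true, false, true, false}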
@@ -17,7 +17,6 @@
 #include "gtest/gtest.h"
 #include "ngraph/ngraph.hpp"
 #include "ngraph/runtime/tensor.hpp"
-#include "op/convolution.hpp"
 #include "runtime/backend.hpp"
 #include "util/all_close.hpp"
 #include "util/all_close_f.hpp"
@@ -38,20 +37,10 @@ NGRAPH_TEST(${BACKEND_NAME}, convolution_outlining)
     Shape shape_b{2, 2, 1, 1};
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape_b);
     Shape shape_r{1, 2, 2, 2};
-    auto conv1 = make_shared<op::v0::Convolution>(A,
-                                                  B,
-                                                  Strides{1, 1},
-                                                  Strides{1, 1},
-                                                  CoordinateDiff{0, 0},
-                                                  CoordinateDiff{0, 0},
-                                                  Strides{1, 1});
-    auto conv2 = make_shared<op::v0::Convolution>(conv1,
-                                                  B,
-                                                  Strides{1, 1},
-                                                  Strides{1, 1},
-                                                  CoordinateDiff{0, 0},
-                                                  CoordinateDiff{0, 0},
-                                                  Strides{1, 1});
+    auto conv1 = make_shared<op::v1::Convolution>(
+        A, B, Strides{1, 1}, CoordinateDiff{0, 0}, CoordinateDiff{0, 0}, Strides{1, 1});
+    auto conv2 = make_shared<op::v1::Convolution>(
+        conv1, B, Strides{1, 1}, CoordinateDiff{0, 0}, CoordinateDiff{0, 0}, Strides{1, 1});
     auto f = make_shared<Function>(conv2, ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");
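The re-ordered constructor arguments reflect the different v0/v1 signatures; as a sketch of the mapping (auto_pad defaults to PadType::EXPLICIT and is omitted, and v1 has no data-dilation parameter, which all of these tests leave at {1, 1} anyway):

    // v0::Convolution(data, filters, window_movement_strides,
    //                 window_dilation_strides, padding_below, padding_above,
    //                 data_dilation_strides)
    // v1::Convolution(data, filters, strides, pads_begin, pads_end, dilations)
    auto conv = make_shared<op::v1::Convolution>(A,
                                                 B,
                                                 Strides{1, 1},        // strides
                                                 CoordinateDiff{0, 0}, // pads_begin
                                                 CoordinateDiff{0, 0}, // pads_end
                                                 Strides{1, 1});       // dilations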
@@ -77,13 +66,8 @@ NGRAPH_TEST(${BACKEND_NAME}, convolution_simple)
     Shape shape_b{2, 2, 1, 1};
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape_b);
     Shape shape_r{1, 2, 2, 2};
-    auto conv1 = make_shared<op::v0::Convolution>(A,
-                                                  B,
-                                                  Strides{1, 1},
-                                                  Strides{1, 1},
-                                                  CoordinateDiff{0, 0},
-                                                  CoordinateDiff{0, 0},
-                                                  Strides{1, 1});
+    auto conv1 = make_shared<op::v1::Convolution>(
+        A, B, Strides{1, 1}, CoordinateDiff{0, 0}, CoordinateDiff{0, 0}, Strides{1, 1});

     auto f = make_shared<Function>(conv1, ParameterVector{A, B});

@@ -110,13 +94,8 @@ NGRAPH_TEST(${BACKEND_NAME}, convolution_simple_padding)
     Shape shape_b{1, 1, 1, 1};
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape_b);
     Shape shape_r{1, 1, 5, 5};
-    auto conv1 = make_shared<op::v0::Convolution>(A,
-                                                  B,
-                                                  Strides{1, 1},
-                                                  Strides{1, 1},
-                                                  CoordinateDiff{1, 1},
-                                                  CoordinateDiff{2, 2},
-                                                  Strides{1, 1});
+    auto conv1 = make_shared<op::v1::Convolution>(
+        A, B, Strides{1, 1}, CoordinateDiff{1, 1}, CoordinateDiff{2, 2}, Strides{1, 1});

     auto f = make_shared<Function>(conv1, ParameterVector{A, B});

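The expected result shapes are unchanged by the migration. As a quick consistency check on the padded case, assuming the 2x2 input plane and 1x1 kernel that shape_r{1, 1, 5, 5} implies (the full test body is not shown in this diff):

    // out = (in + pads_begin + pads_end - dilation * (kernel - 1) - 1) / stride + 1
    //     = (2 + 1 + 2 - 1 * (1 - 1) - 1) / 1 + 1 = 5
    size_t out_dim(size_t in, size_t pb, size_t pe, size_t k, size_t d, size_t s)
    {
        return (in + pb + pe - d * (k - 1) - 1) / s + 1;
    }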
@@ -41,8 +41,6 @@
 #include "util/test_control.hpp"
 #include "util/test_tools.hpp"

-NGRAPH_SUPPRESS_DEPRECATED_START
-
 using namespace std;
 using namespace ngraph;

@@ -54,7 +52,7 @@ NGRAPH_TEST(${BACKEND_NAME}, divide)

     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(make_shared<op::Divide>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Divide>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -76,7 +74,7 @@ NGRAPH_TEST(${BACKEND_NAME}, divide_int32)

     auto A = make_shared<op::Parameter>(element::Type_t::i32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::i32, shape);
-    auto f = make_shared<Function>(make_shared<op::Divide>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Divide>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -98,7 +96,7 @@ NGRAPH_TEST(${BACKEND_NAME}, divide_cpp_rounding_int32)

     auto A = make_shared<op::Parameter>(element::Type_t::i32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::i32, shape);
-    auto f = make_shared<Function>(make_shared<op::Divide>(A, B, false), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Divide>(A, B, false), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

@@ -120,7 +118,7 @@ NGRAPH_TEST(${BACKEND_NAME}, divide_python_rounding_int32)

     auto A = make_shared<op::Parameter>(element::Type_t::i32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::i32, shape);
-    auto f = make_shared<Function>(make_shared<op::Divide>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Divide>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

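The two rounding tests exercise v1::Divide's pythondiv flag: the default (true) floors integer quotients like Python, while false truncates toward zero like C++. For example, -7 / 2 is -4 under python rounding but -3 under C++ rounding:

    auto py_div  = make_shared<op::v1::Divide>(A, B);        // python rounding (default)
    auto cpp_div = make_shared<op::v1::Divide>(A, B, false); // C++ rounding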
@@ -142,7 +140,7 @@ NGRAPH_TEST(${BACKEND_NAME}, divide_overload)

     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(A / B, ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Divide>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");

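With the overloaded math operators gone, graphs built with the operator sugar construct nodes explicitly; the same pattern applies to the other arithmetic operators:

    // Before: make_shared<Function>(A / B, ParameterVector{A, B});
    // After:
    auto f = make_shared<Function>(make_shared<op::v1::Divide>(A, B), ParameterVector{A, B});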
@@ -164,7 +162,7 @@ NGRAPH_TEST(${BACKEND_NAME}, divide_by_zero_float32)

     auto A = make_shared<op::Parameter>(element::Type_t::f32, shape);
     auto B = make_shared<op::Parameter>(element::Type_t::f32, shape);
-    auto f = make_shared<Function>(make_shared<op::Divide>(A, B), ParameterVector{A, B});
+    auto f = make_shared<Function>(make_shared<op::v1::Divide>(A, B), ParameterVector{A, B});

     auto backend = runtime::Backend::create("${BACKEND_NAME}");
