* add multimodel memleaks test runner
* Add support for is_equal_data, get_source_tensor, get_target_tensor methods in ONNX FE API (#6991)
* Change ngraph public api (#6920)
* Moved nGraph function
* Added legacy nGraph function
* Moved export API
* Moved variant
* Added old header for backward compatibility
* Introduce define
* [LPT] LP Transformations refactoring after dynamic shapes support (#6950)
* [LPT] Transformations refactoring after dynamic shapes support
* [LPT] ReshapeTransformation: 2D->2D fix
* [LPT] fixes after review
* [GPU] Fix ScatterNDUpdate unit tests (#7103)
* Deprecate ngraph file utils. Need to have common functions (#7105)
* [GPU] Get rid of memory alloc for input_layout in internal networks (#6897)
* Renamed component name (#7110)
* Propose new MaxPool-8 operation (#5359)
* MaxPool-8: pads_value attribute removal from the operator definition (#7119)
* Remove deprecated option and enable compilation without device (#6022)
* Build openvino wheel package from setup.py (#7091)
* Added ability to build wheel package by executing `setup.py bdist_wheel`
* fix linter issues
* fix formatting
* remove blank line
* Support unregistered operations in MO IR Reader (#6837)
* Add support for unregistered operations in MO IR Reader
* Remove commented lines
* Add shapes equality check
* Update comments
* Update groupconv_to_conv function to support case with multiple destinations
* Add ir_data_attrs attribute to restored layers
* Update copy_shape_infer function to new graph api
* Add attribute IE to unsupported operations to save their attributes
* Fix wrong attribute name
* Update commentary
* Partially revert updating to new Graph API to fix regression, add appropriate comments
* Update code comments
* Rename copy_shape_infer function and add more comments
* First stage of the AUTO-MULTI merge: redirecting AUTO to the MULTI device plugin (#7037)
* Redirect -d AUTO to MULTI device plugin Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Modify AUTO tests with MULTI config Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Fix CI Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Fix bug: CVS-62424 Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Add some tests for AUTO Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Add select device logic to MULTI Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Fix extract device name bug Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Address reviewer's comment Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Delete AUTO plugin source code Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* [CPU] Deform. conv. - reference enforced (#6945)
* [CPU] Bump up MKLDNN version to get fix of c26453 warning (#7089)
* OpenVINO ONNX CI Azure - update onnx/models on demand only (#7125)
* [CPU] Added improvements for StridedSlice (#6658)
* Removal of FusedOp inheritance leftovers (#7113)
* Remove FusedOp from v0::Gelu
* Update v0::Gelu NGRAPH_RTTI_DECLARATION
* Enable gelu type_prop tests
* Remove FusedOp from v0::MVN
* Remove FusedOp from HardSigmoid
* Remove FusedOp from LSTMSequence
* Remove suppress deprecated
* Add missed NGRAPH_OP_SCOPE to v0 Gelu and HardSigmoid
* Move ngraph::element::Type to ov namespace (#7124)
* Moved ngraph::Type -> ov::Type
* Revert original files
* [GNA] Fix order of SwapMatMulInput transformations (#7137)
* Moved Dimension, PartialShape, Interval, Rank to ov namespace (#7136)
* [MO] Implementation for If with tf extractor (#6662)
* Add tf2.x impl for If
* Fix ir_engine
* Fix opset
* Fix BOM file
* Added new test
* Fix comments
* Add subgraph_utils
* Fix comments
* Fix transform
* code refactoring
* Fix description
* rewrite support for empty tensor in if
* added onnx extractor
* delete onnx_if
* fix bug with fake_outputs
* Fix test
* Fix control_flow and fix comments
* create method results_mapping_and_finding_fake_outputs(output_nodes_in_subgraph,
* [GPU] Fixed 'assigned to self' error in loop_inst.h (#7126)
* [GPU] Fix build for gcc 10 (#7142)
* [GNA] Set input scale factors for imported model (#7139)
* add doc: 'Paddle_Support.md' (#7122)
* add doc: 'Paddle_Support.md'
* Apply suggestions from code review Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* Apply suggestions from code review
* Update docs/IE_DG/Paddle_Support.md Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* Remove local configs and its copying to bin/ for stress tests (#7131)
* Moved DEPRECATION macro and ITT domains to ov namespace (#7153)
* Moved DEPRECATION macro and ITT domains to ov namespace
* Fixed code style
* Enable NormalizeL2Fusion and LeakyReluFusion inside MOC (#7096)
* Enable NormalizeL2Fusion inside MOC
* Fix NormalizeL2 decomposition (KeepDims=True)
* Add support for ONNX Crop operator (#6956)
* Review/update spec for NotEqual operation (#6797)
* Hiding the problem, Validate() changes 'function'
* Review/update spec for NotEqual operation
* Remove unnecessary edits not related to the ticket
* Removing the extra word binary from the short description
* Re-writing detailed description
* Correcting punctuation in docs/ops/comparison/NotEqual_1.md Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* Specifying auto_broadcast in the short description, similar to the Equal spec
* The range of values for auto_broadcast is similar to the Equal spec and includes the missing pdpd Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* Moved DiscreteTypeInfo to ov namespace (#7127)
* Moved DiscreteTypeInfo to new opset
* Revert old header
* Fixed code style
* Revise CTCLoss OP (#6953)
* Add visitor test to CTCLoss
* Add CTC Loss SSLT
* Add CTC Loss template tests
* Use ngraph rtti macros
* Code style fix
* Enable PriorBoxClustered tests (#7078)
* CumSum spec revision (#6966)
* Update detailed description
* Update exclusive attribute description
* Update Inputs/Output description
* Update types
* Update descriptions
* Update data input rank info
* Added common.hpp file with aliases (#7158)
* CumSum reference implementation revision (#6915)
* New CumSum implementation init
* Unified ndim approach
* Move transpose to separate function
* Move transpose to original to separate function
* Move slice_count calculation to function
* Negative axes support
* Refactor redundant copy
* Changed copy to move
* Temp more backend tests
* Add const to shape arg
* Use span for slices calculation
* Remove unused headers
* CumSum new ref tests
* Add more ref tests
* Add all cumsum modes ref tests
* new optimized cum_sum reference
* Add reverse mode
* Optimized cumsum ref
* Remove deprecated cumsum backend tests
* Add more CumSum reference tests
* Simplify CumSum shared layer tests SetUp
* Replace auto with size_t in loop
* Change static_cast to T{}
* [LPT] MarkupCanBeQuantized: handled unsupported concat (#7045)
* [LPT] MarkupCanBeQuantized: added check on unsupported concat
* [LPT] ConcatTransformation: added test-case with unsupported concat and convolution
* [LPT] added test on rtInfo check for unsupported concat
* [LPT] ConcatTransformation: added test-case with unsupported axis to plugin tests
* CVS-56144 Enable all OMZ scope (#7084)
* Install layer tests with CMake (#6892)
* add CMakeLists.txt
* add copyright docstring
* add newline after copyright
* set target name
* change TARGET to DIRECTORY
* Rename layer tests dir to avoid name conflict
* cmakelists.txt final version
* Change destination to tests\layer_tests_openvino
* Add cmake_minimum_required to CMakeLists.txt
* Update CMakeLists.txt
* ReverseSequence specification refactored (#7112)
* ReverseSequence specification refactored
* Change attribute description to avoid confusion
* Allow seq_lenghts input to be of floating-point precision
* MemCheck add INT8 models to pre-commit (#7166)
* updated desktop configs with int8 models
* updated desktop reference configs with actual values
* added commit comments
* parametrize proxy (#7174)
* Updated list of supported operations (#6981)
* Updated list of supported layers.
* Removed Crop, softsign from Kaldi list.
* Updated limitations.
* Corrected limitations.
* Updated limitations.
* Added Einsum, corrected Where.
* Apply suggestions from code review Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
* [MO] turn on MarkSubGraphsWithCorrectLayout for TF NCHW (#7150)
* turned on MarkSubgraphsWithCorrectLayout for TF NCHW
* restricted MarkSubgraphsWithCorrectLayout.py only to TF
* added comments on why MarkSubgraphsWithCorrectLayout is needed even for TF NCHW models
* [CPU][TESTS][LPT] MatMulTransformations test-cases removed from skip config (#7181)
* Fixed ngraph_onnx_importer compatibility target creation for older cmake (3.10) (#7176)
* [GPU] Fix clBuildProgram failure with ssd_mobilnet_v1_coco and batch=256 (#7121)
* Fix v0::MVN default constructor (#7175)
* [GPU] Fixes for correct MultiDevice plugin and inference request behavior (#7161)
* Fix op category section in operations spec (#7130)
* add ngraph::pass::LSTMCellDecomposition as mandatory (#7028)
* add ngraph::pass::LSTMCellDecomposition as mandatory
* move LSTMCellDecomposition just after CommonOptimizations, before all convert-opset transformations
* code review fixes: add flag that prevents some legacy transformations if their ngraph-based analogues were executed
* remove isNgraphPassesUsed from ModelQuantizer
* cleanups
* [IE] Convert to unsigned on the NMS:0 -> Gather path (#6474)
* inserted Convert to unsigned
* moved declarations from hpp into cpp, specification corrected
* added static const modifier
* updated convert specification
* minor corrections
* split into 3 passes (Init, Propagate, Update), renamed final pass to ConvertNmsGatherPathToUnsigned
* added description why transformation is needed
* added matcher for several NMS versions, removed TRANSFORMATIONS_API macros from cpp
* applied comments: used GraphRewrite instead of FunctionPass; simplified some expressions; corrected case when Convert's output goes to multiple nodes; added to MOC transformations; other minor corrections
* removed redundant namespace prefixes
* fixed #include <ngraph/pass/graph_rewrite.hpp>
* removed matcher_scope, debug code, and redundant dynamic_cast
* [nG] [IE] use GatherBase in negative indices resolver (#7145)
* updated pattern matcher to GatherBase in negative indices resolver, so that it is triggered for all versions of the operation
* copy_runtime_info fix
* added constant folding
* Nested loop (#6710)
* initial changes to support nested loop
* fixed issues
* fixed nested loop extraction
* added comments
* removed unneeded comments
* review fix
* added tests
* turned off loop tests on GPU
* set xfail for TF tests
* removed TF test to move it to another repo
* fix typo in comment
* move duplicated code to separate functions; added asserts
* add function for onnx constant creation; add function to create body of loop; add comments to test
* move main change for nested loop to separate function
* install necessary dirs for tests (#7044)
* install necessary dirs to tests
* rem RUNTIME from install step
* fix paths
* fix install paths
* fix install paths: add destination dirs
* add pandas
* fix requirements conflict - change pytest version to ~5
* remove comment from requirements.txt
* upd numpy version
* Added openvino infer request API (#7151)
* [CPU] Removed eltwise overhead on execution stage (#6760)
* [GNA] For similar records, the pattern length was increased to 4 in the algorithm for determining infinite cycles. (#7165)
* for similar records, the pattern length was increased to 4
* Added comments
* [CPU] Enable direct copy implementation for u8->u8 reorder. (#7043)
* [CPU] Fix not expected No-Preprocess Exception with RGB to BGR conversion (#6954)
* [CPU] Avoid inserting additional transpose + reorder after RNN node. (#5921)
* [MO] Replacing StridedSlice with Squeeze/Unsqueeze (#6693)
* added reinterp_shape parameter to tf ss extractor
* removed reinterp_shape
* added transformation to replace ss
* updated bom
* fix for e2e tests
* updated a case when shrink_axis_mask and new_axis_mask are both initialized
* unittests
* added comments
* updated graph_condition
* comments resolving
* updated the case when shrink_axis_mask and new_axis_mask are both initialized
* added layer tests for squeeze/unsqueeze cases
* remove case when shrink and new axis masks are both set
* [VPU] Added ConvertGather7ToGather1 pass to frontend (#7183). This PR adds the ConvertGather7ToGather1 pass to the frontend before the MergeGatherGatherElements pass, so that when MergeGatherGatherElements is run, any v7::Gather will already have been replaced with v1::Gather.
* [MO] Add transformation for single CTCGreedyDecoder operation (#7023)
* Add transformation for single CTCGreedyDecoder operation
* Fix style in op specification
* Update transformation logic
* refactor old tests and add tests for new transformation
* Move tf specific front transformations to tf folder
* Update transformation logic and comments
* Add run_after function and update comments
* Add output_sparse_format attribute to extractor
* Update transformation conditions and tests
* Fix incorrect comment
* Move sparse_to_dense_replacer to front/tf folder to fix problems with class registration
* Update import
* Update output ports handling in transformation
* Update test
* Fix BOM file
* Update pattern for ctcloss transformation
* Fix and refactor tests for ctcloss transform
* Update transformation conditions
* Add support of opset11 for gemm normalizer (#6733)
* Add support of opset11 for gemm normalizer
* Add layer test for gemm opset 11
* Fix layer test
* Fix layer test
* Refactoring according to code review
* Fix
* Update biases norm
* Refactoring matmul norm
* Fix according to review
* Fix alpha parameter
* Fix variable naming
* Refactoring according to code review
* Add support for ONNX RandomUniform and RandomUniformLike ops (#7190)
* remove adaptive pool2d shape check in ngraph paddle frontend (#7074)
* remove adaptive pool2d shape check in ngraph paddle frontend
* add ngraph paddle frontend dynamic pool2d test
* Revise ReverseSequence reference implementation (#7117)
* ReverseSequence ngraph op shell revision with type_prop tests
* Add attribute count check in visitor test
* Refactor backend tests to template plugin test with reference values
* Rename cpu SLT instances
* Add op to list of trusted operations
* Rewrite validation check for input type due to backward compatibility
* Reference implementation speed-up by replacing index function calls of CoordinateTransform with precalculated strides
* Moved attribute_adapter, attribute_visitor files to ov namespace (#7179)
* Fixed nGraph build
* Fixed nGraph unit tests
* Fixed func tests
* Fix some operators
* Fixed build
* Try to fix specialization in different namespace
* Try to fix build
* Fixed element_type
* memleaks multimodel supporting
* revert rebase mistake
* add newline at the end of config file
* fix log messages
* refine memleaks test case class
* remove temporary decision designed to save in memory pipeline functions parameters
* code consistency
* rework example of new memleak tests config format
* oop in testcases
* fix mistype
* set num of iterations in example test config to previous value
* add multiproc stress unit tests
* Add more cases
* remove unique_ptr test objects saving logic
* switch memleak test configs to new format
* switch weekly memleak test config to new format
* Clarify new get_testdata script arg
* clang-format
* wrong changes
* Add docstring to generateTestsParamsMemLeaks()
* add explanation of what update_item_for_name() is doing
* Autodetect stress framework while parsing models
* adjust the wording
* Shorten test cases names
* fix get_testdata for memcheck tests

Co-authored-by: Mateusz Bencer <mateusz.bencer@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
Co-authored-by: Sergey Shlyapnikov <sergey.shlyapnikov@intel.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
Co-authored-by: Ilya Sharikov <ilya.sharikov@intel.com>
Co-authored-by: Michał Karzyński <michal.karzynski@intel.com>
Co-authored-by: Tomasz Dołbniak <tomasz.dolbniak@intel.com>
Co-authored-by: Daria Mityagina <daria.mityagina@intel.com>
Co-authored-by: Sergey Lyubimtsev <sergey.lyubimtsev@intel.com>
Co-authored-by: Anton Chetverikov <Anton.Chetverikov@intel.com>
Co-authored-by: Shoujiang Ma <shoujiang.ma@intel.com>
Co-authored-by: Yury Gaydaychuk <yury.gaydaychuk@intel.com>
Co-authored-by: Vitaliy Urusovskij <vitaliy.urusovskij@intel.com>
Co-authored-by: Rafal Blaczkowski <rafal.blaczkowski@intel.com>
Co-authored-by: Alexandra Sidorova <alexandra.sidorova@intel.com>
Co-authored-by: Katarzyna Mitrus <katarzyna.mitrus@intel.com>
Co-authored-by: Elizaveta Lobanova <elizaveta.lobanova@intel.com>
Co-authored-by: Eugeny Volosenkov <eugeny.volosenkov@intel.com>
Co-authored-by: Paul Youngsoo Ahn <paul.y.ahn@intel.com>
Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
Co-authored-by: Liu Bo <bo4.liu@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Olesya Martinyuk <olesya.martinyuk@intel.com>
Co-authored-by: Gleb Kazantaev <gleb.nnstu@gmail.com>
Co-authored-by: Nikita Semaev <nikita.semaev@intel.com>
Co-authored-by: Bartosz Lesniewski <bartosz.lesniewski@intel.com>
Co-authored-by: Anton Pankratv <anton.pankratov@intel.com>
Co-authored-by: Anastasiia Urlapova <anastasiia.urlapova@intel.com>
Co-authored-by: Gabriele Galiero Casay <gabriele.galiero.casay@intel.com>
Co-authored-by: Anastasia Popova <anastasia.popova@intel.com>
Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
Co-authored-by: Pavel Esir <pavel.esir@intel.com>
Co-authored-by: Maksim Shabunin <maksim.shabunin@gmail.com>
Co-authored-by: Andrew Kwangwoong Park <andrew.kwangwoong.park@intel.com>
Co-authored-by: Mikhail Letavin <mikhail.letavin@intel.com>
Co-authored-by: Evgeny Kotov <evgeny.kotov@intel.com>
Co-authored-by: Svetlana Dolinina <svetlana.a.dolinina@intel.com>
Co-authored-by: Victor Kuznetsov <victor.kuznetsov@intel.com>
Co-authored-by: Dmitrii Khurtin <dmitrii.khurtin@intel.com>
Co-authored-by: Ivan Novoselov <ivan.novoselov@intel.com>
Co-authored-by: Aleksandr Pertovsky <aleksandr.pertovsky@intel.com>
Co-authored-by: Nikolay Shchegolev <nikolay.shchegolev@intel.com>
Co-authored-by: Yegor Kruglov <yegor.kruglov@intel.com>
Co-authored-by: Polina Brzezinskaya <polina.brzezinskaya@intel.com>
Co-authored-by: iliya mironov <iliya.mironov@intel.com>
Co-authored-by: mei, yang <yang.mei@intel.com>
143 lines · 6.7 KiB · C++
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include "tests_utils.h"

#include <gtest/gtest.h>
#include <map>
#include <pugixml.hpp>
#include <string>

#define DEBUG_MODE false

const pugi::xml_document &Environment::getTestConfig() { return _test_config; }

void Environment::setTestConfig(const pugi::xml_document &test_config) { _test_config.reset(test_config); }
std::vector<TestCase> generateTestsParams(std::initializer_list<std::string> fields) {
    std::vector<TestCase> tests_cases;
    const pugi::xml_document &test_config = Environment::Instance().getTestConfig();

    std::vector<int> processes, threads, iterations;
    std::vector<std::string> devices, models, models_names, precisions;

    pugi::xml_node values;
    for (auto field = fields.begin(); field != fields.end(); field++) {
        if (*field == "processes") {
            values = test_config.child("attributes").child("processes");
            for (pugi::xml_node val = values.first_child(); val; val = val.next_sibling())
                processes.push_back(val.text().as_int());
        } else if (*field == "threads") {
            values = test_config.child("attributes").child("threads");
            for (pugi::xml_node val = values.first_child(); val; val = val.next_sibling())
                threads.push_back(val.text().as_int());
        } else if (*field == "iterations") {
            values = test_config.child("attributes").child("iterations");
            for (pugi::xml_node val = values.first_child(); val; val = val.next_sibling())
                iterations.push_back(val.text().as_int());
        } else if (*field == "devices") {
            values = test_config.child("attributes").child("devices");
            for (pugi::xml_node val = values.first_child(); val; val = val.next_sibling())
                devices.push_back(val.text().as_string());
        } else if (*field == "models") {
            values = test_config.child("attributes").child("models");
            for (pugi::xml_node val = values.first_child(); val; val = val.next_sibling()) {
                std::string full_path = val.attribute("full_path").as_string();
                std::string path = val.attribute("path").as_string();
                if (full_path.empty() || path.empty())
                    throw std::logic_error("One of the 'model' records from test config doesn't contain 'full_path' or "
                                           "'path' attributes");
                models.push_back(full_path);
                models_names.push_back(path);
                std::string precision = val.attribute("precision").as_string();
                precisions.push_back(precision);
            }
        }
    }

    // Fall back to default values for any fields that weren't filled from the config
    processes = !processes.empty() ? processes : std::vector<int>{1};
    threads = !threads.empty() ? threads : std::vector<int>{1};
    iterations = !iterations.empty() ? iterations : std::vector<int>{1};
    devices = !devices.empty() ? devices : std::vector<std::string>{"NULL"};
    models = !models.empty() ? models : std::vector<std::string>{"NULL"};
    precisions = !precisions.empty() ? precisions : std::vector<std::string>{"NULL"};
    models_names = !models_names.empty() ? models_names : std::vector<std::string>{"NULL"};

    // Build the cartesian product of all collected parameter values
    for (auto &numprocesses : processes)
        for (auto &numthreads : threads)
            for (auto &numiters : iterations)
                for (auto &device : devices)
                    for (size_t i = 0; i < models.size(); i++)
                        tests_cases.push_back(TestCase(numprocesses, numthreads, numiters, device, models[i],
                                                       models_names[i], precisions[i]));
    return tests_cases;
}
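The parsing above implies a config shape: an `<attributes>` root with one section per field, where scalar sections hold value nodes read via `text()` and model records carry `full_path`, `path`, and optionally `precision` attributes. An illustrative sketch of such a config (the inner element names and all paths here are invented; the parser only checks the section names and the attributes):

```xml
<attributes>
    <devices>
        <value>CPU</value>
    </devices>
    <models>
        <model path="vgg16/FP32/vgg16.xml" full_path="/models/vgg16/FP32/vgg16.xml" precision="FP32"/>
    </models>
    <processes>
        <value>1</value>
    </processes>
    <threads>
        <value>2</value>
    </threads>
    <iterations>
        <value>100</value>
    </iterations>
</attributes>
```

Any section omitted from the file (or from the `fields` argument) falls back to the defaults set in the function: `1` for the numeric fields and the `"NULL"` placeholder for the string fields.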

// Generate multi-model test cases from a config file with a static test definition.
std::vector<MemLeaksTestCase> generateTestsParamsMemLeaks() {
    std::vector<MemLeaksTestCase> tests_cases;
    const pugi::xml_document &test_config = Environment::Instance().getTestConfig();

    int numprocesses, numthreads, numiterations;
    std::string device_name;

    pugi::xml_node cases = test_config.child("cases");

    for (pugi::xml_node device = cases.first_child(); device; device = device.next_sibling()) {
        device_name = device.attribute("name").as_string("NULL");
        numprocesses = device.attribute("processes").as_int(1);
        numthreads = device.attribute("threads").as_int(1);
        numiterations = device.attribute("iterations").as_int(1);

        std::vector<std::map<std::string, std::string>> models;

        for (pugi::xml_node model = device.first_child(); model; model = model.next_sibling()) {
            std::string full_path = model.attribute("full_path").as_string();
            std::string path = model.attribute("path").as_string();
            if (full_path.empty() || path.empty())
                throw std::logic_error(
                    "One of the 'model' records from test config doesn't contain 'full_path' or 'path' attributes");
            std::string name = model.attribute("name").as_string();
            std::string precision = model.attribute("precision").as_string();
            std::map<std::string, std::string> model_map{{"name", name},
                                                         {"path", path},
                                                         {"full_path", full_path},
                                                         {"precision", precision}};
            models.push_back(model_map);
        }
        tests_cases.push_back(MemLeaksTestCase(numprocesses, numthreads, numiterations, device_name, models));
    }

    return tests_cases;
}
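This reader implies the new multi-model config format: a `<cases>` root whose children each describe one device configuration (with `name`, `processes`, `threads`, and `iterations` attributes, all defaulted when absent) and whose grandchildren list the models run together in that case. An illustrative sketch (element names `device`/`model` are not actually checked by the parser, and the model names and paths are invented):

```xml
<cases>
    <device name="CPU" processes="1" threads="1" iterations="30">
        <model name="vgg16" precision="FP32"
               path="vgg16/FP32/vgg16.xml" full_path="/models/vgg16/FP32/vgg16.xml"/>
        <model name="mtcnn-r" precision="FP16"
               path="mtcnn-r/FP16/mtcnn-r.xml" full_path="/models/mtcnn-r/FP16/mtcnn-r.xml"/>
    </device>
</cases>
```

Unlike `generateTestsParams`, no cartesian product is built here: each `<device>` node maps to exactly one `MemLeaksTestCase` carrying the whole list of models, which is what lets a single memleaks pipeline iterate over several models in one process.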

std::string getTestCaseName(const testing::TestParamInfo<TestCase> &obj) {
    return obj.param.test_case_name;
}

std::string getTestCaseNameMemLeaks(const testing::TestParamInfo<MemLeaksTestCase> &obj) {
    return obj.param.test_case_name;
}

void test_wrapper(const std::function<void(std::string, std::string, int)> &tests_pipeline, const TestCase &params) {
    tests_pipeline(params.model, params.device, params.numiters);
}

void _runTest(const std::function<void(std::string, std::string, int)> &tests_pipeline, const TestCase &params) {
    run_in_threads(params.numthreads, test_wrapper, tests_pipeline, params);
}

void runTest(const std::function<void(std::string, std::string, int)> &tests_pipeline, const TestCase &params) {
#if DEBUG_MODE
    tests_pipeline(params.model, params.device, params.numiters);
#else
    int status = run_in_processes(params.numprocesses, [&]() { _runTest(tests_pipeline, params); });
    ASSERT_EQ(status, 0) << "Test failed with exitcode " << std::to_string(status);
#endif
}