[BENCHMARK_APP] Introduce Numpy array loading for C++ benchmark app/Fix a bug that would cause Python Numpy array loading to fail (#14021)
* [C++/BENCHMARK_APP] Introduce Numpy array loading for C++ benchmark app
* [DOCS/BENCHMARK_APP] Update docs to reflect changes, update list of available extensions from OpenCV, align help messages
* Update inputs_filling.cpp
* Update tools/benchmark_tool/openvino/tools/benchmark/utils/inputs_filling.py Co-authored-by: Zlobin Vladimir <vladimir.zlobin@intel.com>
* Update samples/cpp/benchmark_app/inputs_filling.cpp Co-authored-by: Zlobin Vladimir <vladimir.zlobin@intel.com>
* [C++/Python] Implement quality-of-life improvements from PR comments
* [C++] Fix compilation errors, fix linter output
* [C++/PYTHON] Apply requested changes
* Update samples/cpp/benchmark_app/main.cpp Co-authored-by: Zlobin Vladimir <vladimir.zlobin@intel.com>
* Update samples/cpp/benchmark_app/utils.cpp Co-authored-by: Zlobin Vladimir <vladimir.zlobin@intel.com>
* [PYTHON] Separate loading of numpy arrays similar to images
* [PYTHON] Remove unnecessary 'Prepare xxx file' print
* Update README again because IF OPENCV.. disappeared for some reason
* Update second README with missing IF OPENCV..
* [C++] Remove unnecessary vector print function
* [C++] Add Numpy processing function - TODO link it to the tensor filling
* Revert OneDnn plugin modification
* [C++] Numpy array loading for C++
* [C++] Add (almost) all missing types of data
* Revert submodule modifications
* [C++/PYTHON] Fix compilation errors, clean code
* [C++] Modify supported extensions, add numpy checking to utils, add numpy to get_image_info method
* Update samples/cpp/benchmark_app/inputs_filling.cpp Co-authored-by: Zlobin Vladimir <vladimir.zlobin@intel.com>
* [C++] Fix utils header file to reflect unordered set change
* [PYTHON/C++] Fix compilation errors in C++ code, fix Python dynamic shapes numpy loading
* [C++] Fix explicit instantiation of NumpyArray reader
* [C++] Clang format, minor syntax fixes
* [PYTHON/C++] Remove unnecessary data types, introduce a new approach to cast data of different types from format_rt_reader, remove uppercase types from Python precision parameters
* [PYTHON] Update README to reflect new precision settings
* [PYTHON] Fix README, fix clang format
* [C++] Clean headers
* [C++] Fix uninitialized variable error
* [C++/PYTHON] Fixed choices in Python benchmark, fixed types in C++ benchmark
* [C++] Fixed ov::float16 conversion, fixed Python types map - removed redundancies
* [C++] Add back boolean support
* [C++] Fix compilation errors

---------

Co-authored-by: Zlobin Vladimir <vladimir.zlobin@intel.com>
@@ -118,6 +118,9 @@ Options:
-i <path> Optional. Path to a folder with images and/or binaries or to specific image or binary file.
In case of dynamic shapes models with several inputs provide the same number of files for each input (except cases with single file for any input): "input1:1.jpg input2:1.bin", "input1:1.bin,2.bin input2:3.bin input3:4.bin,5.bin". Also you can pass specific keys for inputs: "random" - for filling input with random data, "image_info" - for filling input with image size.
You should specify either one files set to be used for all inputs (without providing input names) or separate files sets for every input of model (providing inputs names).
Currently supported data types: bmp, bin, npy.
If OPENCV is enabled, this functionality is extended with the following data types:
dib, jpeg, jpg, jpe, jp2, png, pbm, pgm, ppm, sr, ras, tiff, tif.
-d <device> Optional. Specify a target device to infer on (the list of available devices is shown below). Default value is CPU. Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin. Use "-d MULTI:<comma-separated_devices_list>" format to specify MULTI plugin. The application looks for a suitable plugin for the specified device.
-extensions <absolute_path> Required for custom layers (extensions). Absolute path to a shared library with the kernels implementations.
-c <absolute_path> Required for GPU custom kernels. Absolute path to an .xml file with the kernels description.

@@ -195,7 +198,7 @@ Options:
Running the application with the empty list of options yields the usage message given above and an error message.

### More information on inputs

The benchmark tool supports topologies with one or more inputs. If a topology is not data sensitive, you can skip the input parameter, and the inputs will be filled with random values. If a model has only image input(s), provide a folder with images or a path to an image as input. If a model has some specific input(s) (besides images), please prepare a binary file(s) that is filled with data of appropriate precision and provide a path to it as input. If a model has mixed input types, the input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary files one by one.
The benchmark tool supports topologies with one or more inputs. If a topology is not data sensitive, you can skip the input parameter, and the inputs will be filled with random values. If a model has only image input(s), provide a folder with images or a path to an image as input. If a model has some specific input(s) (besides images), please prepare a binary file(s) or numpy array(s) that is filled with data of appropriate precision and provide a path to it as input. If a model has mixed input types, the input folder should contain all required files. Image inputs are filled with image files one by one. Binary inputs are filled with binary files one by one.

## <a name="examples-of-running-the-tool-cpp"></a> Examples of Running the Tool

This section provides step-by-step instructions on how to run the Benchmark Tool with the `asl-recognition` model from the [Open Model Zoo](@ref model_zoo) on CPU or GPU devices. It uses random data as the input.

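Since `.npy` joins the supported input formats, it may help to see what such a file actually contains. The sketch below writes and parses a minimal NPY v1.0 file using only the Python standard library. It is an illustration of the container format under stated assumptions (little-endian float32 only; `write_npy_v1` and `read_npy_v1` are hypothetical helper names), not the parser benchmark_app uses:

```python
import ast
import struct

def write_npy_v1(path, flat_values, shape, descr="<f4"):
    """Write a minimal NPY v1.0 file (little-endian float32 by default)."""
    header = "{'descr': '%s', 'fortran_order': False, 'shape': %s, }" % (
        descr, repr(tuple(shape)))
    # Pad with spaces so that magic + version + header-length + header + newline
    # is a multiple of 64 bytes, as the NPY v1.0 spec requires.
    base = 6 + 2 + 2 + len(header) + 1
    header = header + " " * ((64 - base % 64) % 64) + "\n"
    with open(path, "wb") as f:
        f.write(b"\x93NUMPY" + bytes([1, 0]))      # magic string + version 1.0
        f.write(struct.pack("<H", len(header)))     # little-endian header length
        f.write(header.encode("latin1"))
        f.write(struct.pack("<%df" % len(flat_values), *flat_values))

def read_npy_v1(path):
    """Parse the header dict and the float32 payload back out."""
    with open(path, "rb") as f:
        assert f.read(6) == b"\x93NUMPY"
        major, minor = f.read(2)
        (hlen,) = struct.unpack("<H", f.read(2))
        meta = ast.literal_eval(f.read(hlen).decode("latin1"))
        count = 1
        for d in meta["shape"]:
            count *= d
        data = struct.unpack("<%df" % count, f.read(4 * count))
        return meta["shape"], list(data)
```

In practice one would just call `numpy.save`; the point is that the loader only needs the `descr`, `fortran_order`, and `shape` fields from the header to interpret the raw payload.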
@@ -27,7 +27,10 @@ static const char input_message[] =
    " \"image_info\" - for filling input with image size.\n"
    " You should specify either one files set to be used for all inputs (without "
    "providing "
    "input names) or separate files sets for every input of model (providing inputs names).";
    "input names) or separate files sets for every input of model (providing inputs names).\n"
    "Currently supported data types: bmp, bin, npy.\n"
    "If OPENCV is enabled, this functionality is extended with the following data types:\n"
    "dib, jpeg, jpg, jpe, jp2, png, pbm, pgm, ppm, sr, ras, tiff, tif.";

/// @brief message for model argument
static const char model_message[] =

@@ -14,6 +14,8 @@
#include <vector>

#include "format_reader_ptr.h"
#include "npy.h"
#include "samples/slog.hpp"
#include "shared_tensor_allocator.hpp"
#include "utils.hpp"

@@ -93,6 +95,65 @@ ov::Tensor create_tensor_from_image(const std::vector<std::string>& files,
    return tensor;
}

template <typename T>
ov::Tensor create_tensor_from_numpy(const std::vector<std::string>& files,
                                    size_t inputId,
                                    size_t batchSize,
                                    const benchmark_app::InputInfo& inputInfo,
                                    const std::string& inputName,
                                    std::string* filenames_used = nullptr) {
    size_t tensor_size =
        std::accumulate(inputInfo.dataShape.begin(), inputInfo.dataShape.end(), 1, std::multiplies<size_t>());
    auto allocator = std::make_shared<SharedTensorAllocator>(tensor_size * sizeof(T));
    auto data = reinterpret_cast<T*>(allocator->get_buffer());

    std::vector<std::shared_ptr<unsigned char>> numpy_array_pointers;
    numpy_array_pointers.reserve(batchSize);

    size_t numpy_batch_size = 1;
    if (!inputInfo.layout.empty() && ov::layout::has_batch(inputInfo.layout)) {
        numpy_batch_size = batchSize;
    } else {
        slog::warn << inputName
                   << ": layout is not set or does not contain batch dimension. Assuming that numpy array "
                      "contains data for all batches."
                   << slog::endl;
    }

    for (size_t b = 0; b < numpy_batch_size; ++b) {
        auto inputIndex = (inputId + b) % files.size();
        if (filenames_used) {
            *filenames_used += (filenames_used->empty() ? "" : ", ") + files[inputIndex];
        }
        FormatReader::ReaderPtr numpy_array_reader(files[inputIndex].c_str());
        if (numpy_array_reader.get() == nullptr) {
            slog::warn << "Numpy array " << files[inputIndex] << " cannot be read!" << slog::endl << slog::endl;
            continue;
        }

        std::shared_ptr<unsigned char> numpy_array_data_pointer(numpy_array_reader->getData());
        if (numpy_array_data_pointer) {
            numpy_array_pointers.push_back(numpy_array_data_pointer);
        }
    }

    size_t type_bytes_size = sizeof(T);
    std::unique_ptr<unsigned char[]> bytes_buffer(new unsigned char[type_bytes_size]);

    for (size_t batch_nr = 0; batch_nr < numpy_batch_size; ++batch_nr) {
        for (size_t input_tensor_nr = 0; input_tensor_nr < tensor_size; ++input_tensor_nr) {
            size_t offset = batch_nr * tensor_size + input_tensor_nr;
            for (size_t byte_nr = 0; byte_nr < type_bytes_size; ++byte_nr) {
                bytes_buffer.get()[byte_nr] =
                    numpy_array_pointers.at(batch_nr).get()[offset * type_bytes_size + byte_nr];
            }
            data[offset] = *((T*)(bytes_buffer.get()));
        }
    }

    return ov::Tensor(inputInfo.type, inputInfo.dataShape, ov::Allocator(allocator));
}
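The inner loops above copy each element byte by byte from the raw numpy buffer into a typed destination buffer. The same reinterpretation can be sketched in stdlib Python with `struct` (hypothetical helper name; little-endian data assumed, matching the common `.npy` case):

```python
import struct

def reinterpret_buffer(raw: bytes, fmt_char: str):
    """Reinterpret a raw little-endian byte buffer as a list of typed values,
    element by element, mirroring the byte-wise copy in the C++ template.
    fmt_char is a struct format character, e.g. "f" (float32) or "h" (int16)."""
    size = struct.calcsize("<" + fmt_char)
    assert len(raw) % size == 0, "buffer is not a whole number of elements"
    values = []
    for offset in range(0, len(raw), size):
        values.append(struct.unpack_from("<" + fmt_char, raw, offset)[0])
    return values
```

The C++ version goes through an intermediate `bytes_buffer` mainly to keep the per-element copy explicit regardless of `sizeof(T)`; semantically it is the same "view these bytes as T" operation.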

template <typename T>
ov::Tensor create_tensor_im_info(const std::pair<size_t, size_t>& image_size,
                                 size_t batchSize,
@@ -154,17 +215,23 @@ ov::Tensor create_tensor_from_binary(const std::vector<std::string>& files,
    std::ifstream binaryFile(files[inputIndex], std::ios_base::binary | std::ios_base::ate);
    OPENVINO_ASSERT(binaryFile, "Cannot open ", files[inputIndex]);

    auto fileSize = static_cast<std::size_t>(binaryFile.tellg());
    binaryFile.seekg(0, std::ios_base::beg);
    OPENVINO_ASSERT(binaryFile.good(), "Can not read ", files[inputIndex]);
    auto inputSize = tensor_size * sizeof(T) / binaryBatchSize;
    OPENVINO_ASSERT(fileSize == inputSize,
                    "File ",
                    files[inputIndex],
                    " contains ",
                    fileSize,
                    " bytes, but the model expects ",
                    inputSize);

    std::string extension = get_extension(files[inputIndex]);
    if (extension == "bin") {
        auto fileSize = static_cast<std::size_t>(binaryFile.tellg());
        binaryFile.seekg(0, std::ios_base::beg);
        OPENVINO_ASSERT(binaryFile.good(), "Can not read ", files[inputIndex]);
        OPENVINO_ASSERT(fileSize == inputSize,
                        "File ",
                        files[inputIndex],
                        " contains ",
                        fileSize,
                        " bytes, but the model expects ",
                        inputSize);
    } else {
        throw ov::Exception("Unsupported binary file type: " + extension);
    }

    if (inputInfo.layout != "CN") {
        binaryFile.read(&data[b * inputSize], inputSize);
@@ -208,20 +275,20 @@ ov::Tensor get_image_tensor(const std::vector<std::string>& files,
                            const std::pair<std::string, benchmark_app::InputInfo>& inputInfo,
                            std::string* filenames_used = nullptr) {
    auto type = inputInfo.second.type;
    if (type == ov::element::f32) {
        return create_tensor_from_image<float>(files,
                                               inputId,
                                               batchSize,
                                               inputInfo.second,
                                               inputInfo.first,
                                               filenames_used);
    } else if (type == ov::element::f16) {
    if (type == ov::element::f16) {
        return create_tensor_from_image<ov::float16>(files,
                                                     inputId,
                                                     batchSize,
                                                     inputInfo.second,
                                                     inputInfo.first,
                                                     filenames_used);
    } else if (type == ov::element::f32) {
        return create_tensor_from_image<float>(files,
                                               inputId,
                                               batchSize,
                                               inputInfo.second,
                                               inputInfo.first,
                                               filenames_used);
    } else if (type == ov::element::f64) {
        return create_tensor_from_image<double>(files,
                                                inputId,
@@ -229,6 +296,20 @@ ov::Tensor get_image_tensor(const std::vector<std::string>& files,
                                                inputInfo.second,
                                                inputInfo.first,
                                                filenames_used);
    } else if (type == ov::element::i8) {
        return create_tensor_from_image<int8_t>(files,
                                                inputId,
                                                batchSize,
                                                inputInfo.second,
                                                inputInfo.first,
                                                filenames_used);
    } else if (type == ov::element::i16) {
        return create_tensor_from_image<int16_t>(files,
                                                 inputId,
                                                 batchSize,
                                                 inputInfo.second,
                                                 inputInfo.first,
                                                 filenames_used);
    } else if (type == ov::element::i32) {
        return create_tensor_from_image<int32_t>(files,
                                                 inputId,
@@ -243,13 +324,34 @@ ov::Tensor get_image_tensor(const std::vector<std::string>& files,
                                                 inputInfo.second,
                                                 inputInfo.first,
                                                 filenames_used);
    } else if (type == ov::element::u8) {
    } else if ((type == ov::element::u8) || (type == ov::element::boolean)) {
        return create_tensor_from_image<uint8_t>(files,
                                                 inputId,
                                                 batchSize,
                                                 inputInfo.second,
                                                 inputInfo.first,
                                                 filenames_used);
    } else if (type == ov::element::u16) {
        return create_tensor_from_image<uint16_t>(files,
                                                  inputId,
                                                  batchSize,
                                                  inputInfo.second,
                                                  inputInfo.first,
                                                  filenames_used);
    } else if (type == ov::element::u32) {
        return create_tensor_from_image<uint32_t>(files,
                                                  inputId,
                                                  batchSize,
                                                  inputInfo.second,
                                                  inputInfo.first,
                                                  filenames_used);
    } else if (type == ov::element::u64) {
        return create_tensor_from_image<uint64_t>(files,
                                                  inputId,
                                                  batchSize,
                                                  inputInfo.second,
                                                  inputInfo.first,
                                                  filenames_used);
    } else {
        throw ov::Exception("Input type is not supported for " + inputInfo.first);
    }
@@ -259,16 +361,116 @@ ov::Tensor get_im_info_tensor(const std::pair<size_t, size_t>& image_size,
                              size_t batchSize,
                              const std::pair<std::string, benchmark_app::InputInfo>& inputInfo) {
    auto type = inputInfo.second.type;
    if (type == ov::element::f32) {
    if (type == ov::element::f16) {
        return create_tensor_im_info<ov::float16>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if (type == ov::element::f32) {
        return create_tensor_im_info<float>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if (type == ov::element::f64) {
        return create_tensor_im_info<double>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if (type == ov::element::f16) {
        return create_tensor_im_info<short>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if (type == ov::element::i8) {
        return create_tensor_im_info<int8_t>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if (type == ov::element::i16) {
        return create_tensor_im_info<int16_t>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if (type == ov::element::i32) {
        return create_tensor_im_info<int32_t>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if (type == ov::element::i64) {
        return create_tensor_im_info<int64_t>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if ((type == ov::element::u8) || (type == ov::element::boolean)) {
        return create_tensor_im_info<uint8_t>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if (type == ov::element::u16) {
        return create_tensor_im_info<uint16_t>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if (type == ov::element::u32) {
        return create_tensor_im_info<uint32_t>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else if (type == ov::element::u64) {
        return create_tensor_im_info<uint64_t>(image_size, batchSize, inputInfo.second, inputInfo.first);
    } else {
        throw ov::Exception("Input type is not supported for " + inputInfo.first);
    }
}

ov::Tensor get_numpy_tensor(const std::vector<std::string>& files,
                            size_t inputId,
                            size_t batchSize,
                            const std::pair<std::string, benchmark_app::InputInfo>& inputInfo,
                            std::string* filenames_used = nullptr) {
    auto type = inputInfo.second.type;
    if (type == ov::element::f16) {
        return create_tensor_from_numpy<ov::float16>(files,
                                                     inputId,
                                                     batchSize,
                                                     inputInfo.second,
                                                     inputInfo.first,
                                                     filenames_used);
    } else if (type == ov::element::f32) {
        return create_tensor_from_numpy<float>(files,
                                               inputId,
                                               batchSize,
                                               inputInfo.second,
                                               inputInfo.first,
                                               filenames_used);
    } else if (type == ov::element::f64) {
        return create_tensor_from_numpy<double>(files,
                                                inputId,
                                                batchSize,
                                                inputInfo.second,
                                                inputInfo.first,
                                                filenames_used);
    } else if (type == ov::element::i8) {
        return create_tensor_from_numpy<int8_t>(files,
                                                inputId,
                                                batchSize,
                                                inputInfo.second,
                                                inputInfo.first,
                                                filenames_used);
    } else if (type == ov::element::i16) {
        return create_tensor_from_numpy<int16_t>(files,
                                                 inputId,
                                                 batchSize,
                                                 inputInfo.second,
                                                 inputInfo.first,
                                                 filenames_used);
    } else if (type == ov::element::i32) {
        return create_tensor_from_numpy<int32_t>(files,
                                                 inputId,
                                                 batchSize,
                                                 inputInfo.second,
                                                 inputInfo.first,
                                                 filenames_used);
    } else if (type == ov::element::i64) {
        return create_tensor_from_numpy<int64_t>(files,
                                                 inputId,
                                                 batchSize,
                                                 inputInfo.second,
                                                 inputInfo.first,
                                                 filenames_used);
    } else if ((type == ov::element::u8) || (type == ov::element::boolean)) {
        return create_tensor_from_numpy<uint8_t>(files,
                                                 inputId,
                                                 batchSize,
                                                 inputInfo.second,
                                                 inputInfo.first,
                                                 filenames_used);
    } else if (type == ov::element::u16) {
        return create_tensor_from_numpy<uint16_t>(files,
                                                  inputId,
                                                  batchSize,
                                                  inputInfo.second,
                                                  inputInfo.first,
                                                  filenames_used);
    } else if (type == ov::element::u32) {
        return create_tensor_from_numpy<uint32_t>(files,
                                                  inputId,
                                                  batchSize,
                                                  inputInfo.second,
                                                  inputInfo.first,
                                                  filenames_used);
    } else if (type == ov::element::u64) {
        return create_tensor_from_numpy<uint64_t>(files,
                                                  inputId,
                                                  batchSize,
                                                  inputInfo.second,
                                                  inputInfo.first,
                                                  filenames_used);
    } else {
        throw ov::Exception("Input type is not supported for " + inputInfo.first);
    }
@@ -280,7 +482,14 @@ ov::Tensor get_binary_tensor(const std::vector<std::string>& files,
                             const std::pair<std::string, benchmark_app::InputInfo>& inputInfo,
                             std::string* filenames_used = nullptr) {
    const auto& type = inputInfo.second.type;
    if (type == ov::element::f32) {
    if (type == ov::element::f16) {
        return create_tensor_from_binary<ov::float16>(files,
                                                      inputId,
                                                      batchSize,
                                                      inputInfo.second,
                                                      inputInfo.first,
                                                      filenames_used);
    } else if (type == ov::element::f32) {
        return create_tensor_from_binary<float>(files,
                                                inputId,
                                                batchSize,
@@ -294,13 +503,20 @@ ov::Tensor get_binary_tensor(const std::vector<std::string>& files,
                                                inputInfo.second,
                                                inputInfo.first,
                                                filenames_used);
    } else if (type == ov::element::f16) {
        return create_tensor_from_binary<short>(files,
                                                inputId,
                                                batchSize,
                                                inputInfo.second,
                                                inputInfo.first,
                                                filenames_used);
    } else if (type == ov::element::i8) {
        return create_tensor_from_binary<int8_t>(files,
                                                 inputId,
                                                 batchSize,
                                                 inputInfo.second,
                                                 inputInfo.first,
                                                 filenames_used);
    } else if (type == ov::element::i16) {
        return create_tensor_from_binary<int16_t>(files,
                                                  inputId,
                                                  batchSize,
                                                  inputInfo.second,
                                                  inputInfo.first,
                                                  filenames_used);
    } else if (type == ov::element::i32) {
        return create_tensor_from_binary<int32_t>(files,
                                                  inputId,
@@ -322,6 +538,27 @@ ov::Tensor get_binary_tensor(const std::vector<std::string>& files,
                                                  inputInfo.second,
                                                  inputInfo.first,
                                                  filenames_used);
    } else if (type == ov::element::u16) {
        return create_tensor_from_binary<uint16_t>(files,
                                                   inputId,
                                                   batchSize,
                                                   inputInfo.second,
                                                   inputInfo.first,
                                                   filenames_used);
    } else if (type == ov::element::u32) {
        return create_tensor_from_binary<uint32_t>(files,
                                                   inputId,
                                                   batchSize,
                                                   inputInfo.second,
                                                   inputInfo.first,
                                                   filenames_used);
    } else if (type == ov::element::u64) {
        return create_tensor_from_binary<uint64_t>(files,
                                                   inputId,
                                                   batchSize,
                                                   inputInfo.second,
                                                   inputInfo.first,
                                                   filenames_used);
    } else {
        throw ov::Exception("Input type is not supported for " + inputInfo.first);
    }
@@ -339,7 +576,7 @@ ov::Tensor get_random_tensor(const std::pair<std::string, benchmark_app::InputIn
        return create_tensor_random<int32_t, int32_t>(inputInfo.second);
    } else if (type == ov::element::i64) {
        return create_tensor_random<int64_t, int64_t>(inputInfo.second);
    } else if (type == ov::element::u8) {
    } else if ((type == ov::element::u8) || (type == ov::element::boolean)) {
        // uniform_int_distribution<uint8_t> is not allowed in the C++17
        // standard and vs2017/19
        return create_tensor_random<uint8_t, uint32_t>(inputInfo.second);
@@ -403,8 +640,13 @@ std::map<std::string, ov::TensorVector> get_tensors(std::map<std::string, std::v
    std::string input_name = files.first.empty() ? app_inputs_info[0].begin()->first : files.first;
    auto input = app_inputs_info[0].at(input_name);
    if (!files.second.empty() && files.second[0] != "random" && files.second[0] != "image_info") {
        if (input.is_image()) {
            files.second = filter_files_by_extensions(files.second, supported_image_extensions);
        auto filtered_numpy_files = filter_files_by_extensions(files.second, supported_numpy_extensions);
        auto filtered_image_files = filter_files_by_extensions(files.second, supported_image_extensions);

        if (!filtered_numpy_files.empty()) {
            files.second = filtered_numpy_files;
        } else if (!filtered_image_files.empty() && input.is_image()) {
            files.second = filtered_image_files;
        } else if (input.is_image_info() && net_input_im_sizes.size() == app_inputs_info.size()) {
            slog::info << "Input '" << input_name
                       << "' probably is image info. All files for this input will"
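The branch above prefers numpy files over images by filtering the provided paths against per-kind extension sets. A simplified Python sketch of that filtering idea (hypothetical helper; the real `filter_files_by_extensions` is a C++ utility in benchmark_app):

```python
from typing import Iterable, List, Set

def filter_files_by_extensions(files: Iterable[str], extensions: Set[str]) -> List[str]:
    """Keep only files whose (lowercased) extension is in the supported set."""
    result = []
    for path in files:
        # Take the text after the last dot; files without a dot have no extension.
        ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
        if ext in extensions:
            result.append(path)
    return result
```

Filtering numpy first and falling back to images mirrors the dispatch order: a `.npy` file is always loadable as raw typed data, while image decoding only makes sense for image-like inputs.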
@@ -486,8 +728,9 @@ std::map<std::string, ov::TensorVector> get_tensors(std::map<std::string, std::v
        std::string tensor_src_info;
        if (files.second[0] == "random") {
            // Fill random
            tensor_src_info =
                "random (" + std::string((input_info.is_image() ? "image" : "binary data")) + " is expected)";
            tensor_src_info = "random (" +
                              std::string((input_info.is_image() ? "image/numpy array" : "binary data")) +
                              " is expected)";
            tensors[input_name].push_back(get_random_tensor({input_name, input_info}));
        } else if (files.second[0] == "image_info") {
            // Most likely it is image info: fill with image information
@@ -495,6 +738,10 @@ std::map<std::string, ov::TensorVector> get_tensors(std::map<std::string, std::v
            tensor_src_info =
                "Image size tensor " + std::to_string(image_size.first) + " x " + std::to_string(image_size.second);
            tensors[input_name].push_back(get_im_info_tensor(image_size, batchSize, {input_name, input_info}));
        } else if (supported_numpy_extensions.count(get_extension(files.second[0]))) {
            // Fill with Numpy arrays
            tensors[input_name].push_back(
                get_numpy_tensor(files.second, inputId, batchSize, {input_name, input_info}, &tensor_src_info));
        } else if (input_info.is_image()) {
            // Fill with Images
            tensors[input_name].push_back(
@@ -549,45 +796,26 @@ std::map<std::string, ov::TensorVector> get_tensors_static_case(const std::vecto
        }
    }

    size_t imageInputsNum = net_input_im_sizes.size();
    size_t binaryInputsNum = app_inputs_info.size() - imageInputsNum;
    std::vector<std::string> binaryFiles = filter_files_by_extensions(inputFiles, supported_binary_extensions);
    std::vector<std::string> numpyFiles = filter_files_by_extensions(inputFiles, supported_numpy_extensions);
    std::vector<std::string> imageFiles = filter_files_by_extensions(inputFiles, supported_image_extensions);

    std::vector<std::string> binaryFiles;
    std::vector<std::string> imageFiles;
    size_t imageInputsNum = imageFiles.size();
    size_t numpyInputsNum = numpyFiles.size();
    size_t binaryInputsNum = binaryFiles.size();
    size_t totalInputsNum = imageInputsNum + numpyInputsNum + binaryInputsNum;

    if (inputFiles.empty()) {
        slog::warn << "No input files were given: all inputs will be filled with "
                      "random values!"
                   << slog::endl;
    } else {
        binaryFiles = filter_files_by_extensions(inputFiles, supported_binary_extensions);
        std::sort(std::begin(binaryFiles), std::end(binaryFiles));

        auto binaryToBeUsed = binaryInputsNum * batchSize * requestsNum;
        if (binaryToBeUsed > 0 && binaryFiles.empty()) {
            std::stringstream ss;
            for (auto& ext : supported_binary_extensions) {
                if (!ss.str().empty()) {
                    ss << ", ";
                }
                ss << ext;
            }
            slog::warn << "No supported binary inputs found! Please check your file "
                          "extensions: "
                       << ss.str() << slog::endl;
        } else if (binaryToBeUsed > binaryFiles.size()) {
            slog::warn << "Some binary input files will be duplicated: " << binaryToBeUsed
                       << " files are required but only " << binaryFiles.size() << " are provided" << slog::endl;
        } else if (binaryToBeUsed < binaryFiles.size()) {
            slog::warn << "Some binary input files will be ignored: only " << binaryToBeUsed << " are required from "
                       << binaryFiles.size() << slog::endl;
        }

        imageFiles = filter_files_by_extensions(inputFiles, supported_image_extensions);
        std::sort(std::begin(numpyFiles), std::end(numpyFiles));
        std::sort(std::begin(imageFiles), std::end(imageFiles));

        auto imagesToBeUsed = imageInputsNum * batchSize * requestsNum;
        if (imagesToBeUsed > 0 && imageFiles.empty()) {
        auto filesToBeUsed = totalInputsNum * batchSize * requestsNum;
        if (filesToBeUsed == 0 && !inputFiles.empty()) {
            std::stringstream ss;
            for (auto& ext : supported_image_extensions) {
                if (!ss.str().empty()) {
@@ -595,23 +823,43 @@ std::map<std::string, ov::TensorVector> get_tensors_static_case(const std::vecto
                }
                ss << ext;
            }
            slog::warn << "No supported image inputs found! Please check your file "
            for (auto& ext : supported_numpy_extensions) {
                if (!ss.str().empty()) {
                    ss << ", ";
                }
                ss << ext;
            }
            for (auto& ext : supported_binary_extensions) {
                if (!ss.str().empty()) {
                    ss << ", ";
                }
                ss << ext;
            }
            slog::warn << "Inputs of unsupported type found! Please check your file "
                          "extensions: "
                       << ss.str() << slog::endl;
        } else if (imagesToBeUsed > imageFiles.size()) {
            slog::warn << "Some image input files will be duplicated: " << imagesToBeUsed
                       << " files are required but only " << imageFiles.size() << " are provided" << slog::endl;
        } else if (imagesToBeUsed < imageFiles.size()) {
            slog::warn << "Some image input files will be ignored: only " << imagesToBeUsed << " are required from "
                       << imageFiles.size() << slog::endl;
        } else if (app_inputs_info.size() > totalInputsNum) {
            slog::warn << "Some input files will be duplicated: " << filesToBeUsed << " files are required but only "
                       << totalInputsNum << " are provided" << slog::endl;
        } else if (filesToBeUsed < app_inputs_info.size()) {
            slog::warn << "Some input files will be ignored: only " << filesToBeUsed << " are required from "
                       << totalInputsNum << slog::endl;
        }
    }

    std::map<std::string, std::vector<std::string>> mappedFiles;
    size_t imageInputsCount = 0;
    size_t numpyInputsCount = 0;
    size_t binaryInputsCount = 0;
    for (auto& input : app_inputs_info) {
        if (input.second.is_image()) {
        if (numpyInputsNum) {
            mappedFiles[input.first] = {};
            for (size_t i = 0; i < numpyFiles.size(); i += numpyInputsNum) {
                mappedFiles[input.first].push_back(
                    numpyFiles[(numpyInputsCount + i) * numpyInputsNum % numpyFiles.size()]);
            }
            ++numpyInputsCount;
        } else if (input.second.is_image()) {
            mappedFiles[input.first] = {};
            for (size_t i = 0; i < imageFiles.size(); i += imageInputsNum) {
                mappedFiles[input.first].push_back(
@@ -643,13 +891,26 @@ std::map<std::string, ov::TensorVector> get_tensors_static_case(const std::vecto
    std::vector<std::map<std::string, std::string>> logOutput(test_configs_num);
    for (const auto& files : mappedFiles) {
        size_t imageInputId = 0;
        size_t numpyInputId = 0;
        size_t binaryInputId = 0;
        auto input_name = files.first;
        auto input_info = app_inputs_info.at(files.first);

        for (size_t i = 0; i < test_configs_num; ++i) {
            std::string blob_src_info;
            if (input_info.is_image()) {
            if (files.second.size() && supported_numpy_extensions.count(get_extension(files.second[0]))) {
                if (!numpyFiles.empty()) {
                    // Fill with Numpy arrays
                    blobs[input_name].push_back(get_numpy_tensor(files.second,
                                                                 imageInputId,
                                                                 batchSize,
                                                                 {input_name, input_info},
                                                                 &blob_src_info));
                    numpyInputId = (numpyInputId + batchSize) % files.second.size();
                    logOutput[i][input_name] += get_test_info_stream_header(input_info) + blob_src_info;
                    continue;
                }
            } else if (input_info.is_image()) {
                if (!imageFiles.empty()) {
                    // Fill with Images
                    blobs[input_name].push_back(get_image_tensor(files.second,
@@ -684,8 +945,8 @@ std::map<std::string, ov::TensorVector> get_tensors_static_case(const std::vecto
|
||||
}
|
||||
}
|
||||
// Fill random
|
||||
blob_src_info =
|
||||
"random (" + std::string((input_info.is_image() ? "image" : "binary data")) + " is expected)";
|
||||
blob_src_info = "random (" + std::string((input_info.is_image() ? "image" : "binary data")) +
|
||||
"/numpy array is expected)";
|
||||
blobs[input_name].push_back(get_random_tensor({input_name, input_info}));
|
||||
logOutput[i][input_name] += get_test_info_stream_header(input_info) + blob_src_info;
|
||||
}
|
||||
|
||||
@@ -686,7 +686,7 @@ int main(int argc, char* argv[]) {

         const auto& inputInfo = std::const_pointer_cast<const ov::Model>(model)->inputs();
         if (inputInfo.empty()) {
-            throw std::logic_error("no inputs info is provided");
+            throw std::logic_error("No inputs info is provided");
         }

         // ----------------- 5. Resizing network to match image sizes and given
@@ -543,8 +543,9 @@ std::vector<benchmark_app::InputsInfo> get_inputs_info(const std::string& shape_
                 }
             }

-            size_t w = 0;
             size_t h = 0;
+            size_t w = 0;
+            std::vector<size_t> shape;
             size_t fileIdx = currentFileCounters[item.get_any_name()];
             for (; fileIdx < currentFileCounters[item.get_any_name()] + tensorBatchSize; fileIdx++) {
                 if (fileIdx >= namesVector.size()) {
@@ -553,28 +554,47 @@ std::vector<benchmark_app::InputsInfo> get_inputs_info(const std::string& shape_
                                            "size if -data_shape parameter is omitted and shape is dynamic)");
                 }
                 FormatReader::ReaderPtr reader(namesVector[fileIdx].c_str());
-                if ((w && w != reader->width()) || (h && h != reader->height())) {
-                    throw std::logic_error("Image sizes putting into one batch should be of the same size if input "
-                                           "shape is dynamic and -data_shape is omitted. Problem file: " +
-                                           namesVector[fileIdx]);
+                if ((w && w != reader->width()) || (h && h != reader->height()) ||
+                    (!shape.empty() && shape != reader->shape())) {
+                    throw std::logic_error(
+                        "File dimensions put into one batch should be of the same dimensionality if input "
+                        "shape is dynamic and -data_shape is omitted. Problem file: " +
+                        namesVector[fileIdx]);
                 }
-                w = reader->width();
                 h = reader->height();
+                w = reader->width();
+                shape = reader->shape();
             }
             currentFileCounters[item.get_any_name()] = fileIdx;

-            if (!info.dataShape[ov::layout::height_idx(info.layout)]) {
-                info.dataShape[ov::layout::height_idx(info.layout)] = h;
-            }
-            if (!info.dataShape[ov::layout::width_idx(info.layout)]) {
-                info.dataShape[ov::layout::width_idx(info.layout)] = w;
-            }
+            if (shape.size() == 2) {  // Has only h and w
+                if (!info.dataShape[ov::layout::height_idx(info.layout)]) {
+                    info.dataShape[ov::layout::height_idx(info.layout)] = h;
+                }
+                if (!info.dataShape[ov::layout::width_idx(info.layout)]) {
+                    info.dataShape[ov::layout::width_idx(info.layout)] = w;
+                }
+            } else {  // Is a numpy array
+                size_t shape_idx = 0;
+                if (info.dataShape.size() != shape.size()) {
+                    throw std::logic_error("Shape required by the input and file shape do not have the same rank. "
+                                           "Input: " +
+                                           item.get_any_name() + ", File name: " + namesVector[fileIdx - 1]);
+                }
+                for (size_t i = ov::layout::batch_idx(info.layout);
+                     i < ov::layout::batch_idx(info.layout) + info.dataShape.size();
+                     ++i) {
+                    if (!info.dataShape[i]) {
+                        info.dataShape[i] = shape.at(shape_idx);
+                    }
+                    shape_idx++;
+                }
+            }

             if (std::any_of(info.dataShape.begin(), info.dataShape.end(), [](size_t d) {
                     return d == 0;
                 })) {
-                throw std::logic_error("Not enough information in shape and image to determine tensor shape "
+                throw std::logic_error("Not enough information in shape and file to determine tensor shape "
                                        "automatically. Input: " +
                                        item.get_any_name() + ", File name: " + namesVector[fileIdx - 1]);
             }
@@ -736,14 +756,6 @@ void load_config(const std::string& filename, std::map<std::string, ov::AnyMap>&
     }
 }

-#ifdef USE_OPENCV
-const std::vector<std::string> supported_image_extensions =
-    {"bmp", "dib", "jpeg", "jpg", "jpe", "jp2", "png", "pbm", "pgm", "ppm", "sr", "ras", "tiff", "tif"};
-#else
-const std::vector<std::string> supported_image_extensions = {"bmp"};
-#endif
-const std::vector<std::string> supported_binary_extensions = {"bin"};
-
 std::string get_extension(const std::string& name) {
     auto extensionPosition = name.rfind('.', name.size());
     return extensionPosition == std::string::npos ? "" : name.substr(extensionPosition + 1, name.size() - 1);
@@ -752,36 +764,38 @@ std::string get_extension(const std::string& name) {
 bool is_binary_file(const std::string& filePath) {
     auto extension = get_extension(filePath);
     std::transform(extension.begin(), extension.end(), extension.begin(), ::tolower);
-    return std::find(supported_binary_extensions.begin(), supported_binary_extensions.end(), extension) !=
-           supported_binary_extensions.end();
+    return supported_binary_extensions.find(extension) != supported_binary_extensions.end();
 }

+bool is_numpy_file(const std::string& filePath) {
+    auto extension = get_extension(filePath);
+    std::transform(extension.begin(), extension.end(), extension.begin(), ::tolower);
+    return supported_numpy_extensions.find(extension) != supported_numpy_extensions.end();
+}
+
 bool is_image_file(const std::string& filePath) {
     auto extension = get_extension(filePath);
     std::transform(extension.begin(), extension.end(), extension.begin(), ::tolower);
-    return std::find(supported_binary_extensions.begin(), supported_binary_extensions.end(), extension) !=
-           supported_binary_extensions.end();
+    return supported_image_extensions.find(extension) != supported_image_extensions.end();
 }

 bool contains_binaries(const std::vector<std::string>& filePaths) {
-    std::vector<std::string> filtered;
     for (auto& filePath : filePaths) {
-        auto extension = get_extension(filePath);
-        std::transform(extension.begin(), extension.end(), extension.begin(), ::tolower);
-        if (std::find(supported_binary_extensions.begin(), supported_binary_extensions.end(), extension) !=
-            supported_binary_extensions.end()) {
+        if (is_binary_file(filePath)) {
             return true;
         }
     }
     return false;
 }

 std::vector<std::string> filter_files_by_extensions(const std::vector<std::string>& filePaths,
-                                                    const std::vector<std::string>& extensions) {
+                                                    const std::unordered_set<std::string>& extensions) {
     std::vector<std::string> filtered;
     for (auto& filePath : filePaths) {
         auto extension = get_extension(filePath);
         std::transform(extension.begin(), extension.end(), extension.begin(), ::tolower);
-        if (std::find(extensions.begin(), extensions.end(), extension) != extensions.end()) {
+        if (extensions.find(extension) != extensions.end()) {
             filtered.push_back(filePath);
         }
     }
@@ -10,8 +10,18 @@
 #include <openvino/openvino.hpp>
 #include <samples/slog.hpp>
 #include <string>
+#include <unordered_set>
 #include <vector>

+#ifdef USE_OPENCV
+const std::unordered_set<std::string> supported_image_extensions =
+    {"bmp", "dib", "jpeg", "jpg", "jpe", "jp2", "png", "pbm", "pgm", "ppm", "sr", "ras", "tiff", "tif"};
+#else
+const std::unordered_set<std::string> supported_image_extensions = {"bmp"};
+#endif
+const std::unordered_set<std::string> supported_numpy_extensions = {"npy"};
+const std::unordered_set<std::string> supported_binary_extensions = {"bin"};
+
 typedef std::chrono::high_resolution_clock Time;
 typedef std::chrono::nanoseconds ns;
@@ -117,14 +127,13 @@ std::vector<benchmark_app::InputsInfo> get_inputs_info(const std::string& shape_
 void dump_config(const std::string& filename, const std::map<std::string, ov::AnyMap>& config);
 void load_config(const std::string& filename, std::map<std::string, ov::AnyMap>& config);

-extern const std::vector<std::string> supported_image_extensions;
-extern const std::vector<std::string> supported_binary_extensions;
-
 std::string get_extension(const std::string& name);
 bool is_binary_file(const std::string& filePath);
+bool is_numpy_file(const std::string& filePath);
+bool is_image_file(const std::string& filePath);
 bool contains_binaries(const std::vector<std::string>& filePaths);
 std::vector<std::string> filter_files_by_extensions(const std::vector<std::string>& filePaths,
-                                                    const std::vector<std::string>& extensions);
+                                                    const std::unordered_set<std::string>& extensions);

 std::string parameter_name_to_tensor_name(
     const std::string& name,