Reflected that plugins on OSX are also .so (#4754)

This commit is contained in:
Ilya Lavrenov
2021-03-12 15:25:43 +03:00
committed by GitHub
parent 6ac849eed3
commit 96933c5598
2 changed files with 3 additions and 3 deletions


@@ -84,7 +84,7 @@ Each device plugin includes a library of optimized implementations to execute kn
execute a custom operation. The custom operation extension is implemented according to the target device:
- Custom Operation CPU Extension
- - A compiled shared library (`.so`, `.dylib` or `.dll`) needed by the CPU Plugin for executing the custom operation
+ - A compiled shared library (`.so` or `.dll`) needed by the CPU Plugin for executing the custom operation
on a CPU. Refer to the [How to Implement Custom CPU Operations](../IE_DG/Extensibility_DG/CPU_Kernel.md) for more
details.
- Custom Operation GPU Extension
@@ -342,7 +342,7 @@ cmake .. -DCMAKE_BUILD_TYPE=Release
make --jobs=$(nproc)
```
-The result of this command is a compiled shared library (`.so`, `.dylib` or `.dll`). It should be loaded in the
+The result of this command is a compiled shared library (`.so` or `.dll`). It should be loaded in the
application using `Core` class instance method `AddExtension` like this
`core.AddExtension(std::make_shared<Extension>(compiled_library_file_name), "CPU");`.
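The `AddExtension` call shown in the hunk above can be sketched as a minimal program. This is a hedged illustration, not part of the commit: the library file name `libcustom_cpu_extension.so` is hypothetical, and the snippet assumes the Inference Engine headers and runtime are available.

```cpp
#include <memory>
#include <string>
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Hypothetical path to the compiled custom-operation library.
    // Per this commit, the suffix is ".so" on Linux and macOS,
    // and ".dll" on Windows.
    const std::string ext_path = "libcustom_cpu_extension.so";

    // Register the custom operation implementations with the CPU plugin,
    // mirroring the one-liner quoted in the documentation above.
    core.AddExtension(
        std::make_shared<InferenceEngine::Extension>(ext_path), "CPU");

    return 0;
}
```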


@@ -38,7 +38,7 @@ This library contains the classes to:
### Plugin Libraries to read a network object ###
Starting from 2020.4 release, Inference Engine introduced a concept of `CNNNetwork` reader plugins. Such plugins can be automatically dynamically loaded by Inference Engine in runtime depending on file format:
-* Linux* OS:
+* Unix* OS:
- `libinference_engine_ir_reader.so` to read a network from IR
- `libinference_engine_onnx_reader.so` to read a network from ONNX model format
* Windows* OS: