Update object_detectors.md (#13994)

* Update object_detectors.md
* Use info
* Move CPU detector to bottom
* Move CPU to bottom
* Add missing detector keys

# Supported Hardware

:::info

Frigate supports multiple different detectors that work on different types of hardware:

**Most Hardware**

**Rockchip**

- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.

**For Testing**

- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a TFLite model; this is not recommended, and in most cases OpenVINO can be used in CPU mode with better results.

:::

# Officially Supported Detectors

Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, `rocm`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors, they will run in dedicated processes but pull from a common queue of detection requests from across all cameras.
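
For illustration, a minimal sketch of a config that defines two detectors pulling from that shared queue. The detector names `coral` and `ov` are arbitrary, and the `type` and `device` values must match your actual hardware, as described in the sections below:

```yaml
detectors:
  # an Edge TPU attached over USB (the name "coral" is arbitrary)
  coral:
    type: edgetpu
    device: usb
  # an OpenVINO detector pinned to the CPU (the name "ov" is arbitrary)
  ov:
    type: openvino
    device: CPU
```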
## Edge TPU Detector

After placing the downloaded onnx model in your config folder, you can use the following configuration:

```yaml
detectors:
  rocm:
    type: rocm

model:
  # ...
```

Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
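
If you need to supply that 80-object labelmap yourself, a minimal sketch using the model-level labelmap path option (the file name and mount point are illustrative):

```yaml
model:
  # bind mount a labelmap containing the 80 COCO objects into the container
  labelmap_path: /labelmap/coco-80.txt
```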

## CPU Detector (not recommended)

The CPU detector type runs a TensorFlow Lite model utilizing the CPU without hardware acceleration. It is recommended to use a hardware-accelerated detector type instead for better performance. To configure a CPU-based detector, set the `"type"` attribute to `"cpu"`.

:::danger

The CPU detector is not recommended for general use. If you do not have GPU or Edge TPU hardware, using the [OpenVINO Detector](#openvino-detector) in CPU mode is often more efficient than using the CPU detector.

:::

The number of threads used by the interpreter can be specified using the `"num_threads"` attribute, and defaults to `3`.

A TensorFlow Lite model is provided in the container at `/cpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.

```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3
    model:
      path: "/custom_model.tflite"
  cpu2:
    type: cpu
    num_threads: 3
```

When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
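
For instance, a minimal sketch of the bind mount for the custom model above, assuming a Docker Compose deployment (the host-side file name and location are illustrative):

```yaml
services:
  frigate:
    # ...rest of the Frigate service definition...
    volumes:
      # expose the custom model at the path referenced by model.path
      - ./custom_model.tflite:/custom_model.tflite:ro
```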

## Deepstack / CodeProject.AI Server Detector

The Deepstack / CodeProject.AI Server detector for Frigate allows you to integrate Deepstack and CodeProject.AI object detection capabilities into Frigate. CodeProject.AI and DeepStack are open-source AI platforms that can be run on various devices such as the Raspberry Pi, Nvidia Jetson, and other compatible hardware. It is important to note that the integration is performed over the network, so the inference times may not be as fast as native Frigate detectors, but it still provides an efficient and reliable solution for object detection and tracking.