Mirror of https://github.com/blakeblackshear/frigate.git, synced 2024-11-26 02:40:44 -06:00
update docs
This commit is contained in:
parent
0914cb71ad
commit
f3db69d975
46 README.md
@@ -42,6 +42,7 @@ Example docker-compose:
      - /dev/bus/usb:/dev/bus/usb
      - /etc/localtime:/etc/localtime:ro
      - <path_to_config>:/config
      - <path_to_directory_for_clips>:/clips
    ports:
      - "5000:5000"
    environment:
@@ -128,15 +129,22 @@ automation:
      - url: http://<ip>:5000/<camera_name>/person/best.jpg
        caption: A person was detected.
```

-## Debugging Endpoint
+## HTTP Endpoints

A web server is available on port 5000 with the following endpoints.

-Keep in mind the MJPEG endpoint is for debugging only and should not be used continuously, as it will put additional load on the system.
-Access the mjpeg stream at `http://localhost:5000/<camera_name>` and the best snapshot for any object type at `http://localhost:5000/<camera_name>/<object_name>/best.jpg`

### `/<camera_name>`

An mjpeg stream for debugging. Keep in mind the mjpeg endpoint is for debugging only and will put additional load on the system when in use.

You can access a higher resolution mjpeg stream by appending `h=height-in-pixels` to the endpoint, for example `http://localhost:5000/back?h=1080`. You can also increase the FPS by appending `fps=frame-rate` to the URL, such as `http://localhost:5000/back?fps=10`, or combine both with `?fps=10&h=1000`.
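The resize and frame-rate options are plain query-string parameters, so the debug URLs can be assembled with standard tooling. A minimal sketch (the camera name `back` and host `localhost` follow the examples above; `mjpeg_url` is an illustrative helper, not part of Frigate):

```python
from urllib.parse import urlencode

def mjpeg_url(host, camera, fps=None, h=None):
    """Build a debug mjpeg stream URL with optional fps/height overrides."""
    params = {k: v for k, v in (("fps", fps), ("h", h)) if v is not None}
    base = f"http://{host}:5000/{camera}"
    return f"{base}?{urlencode(params)}" if params else base

print(mjpeg_url("localhost", "back", h=1080))          # http://localhost:5000/back?h=1080
print(mjpeg_url("localhost", "back", fps=10, h=1000))  # http://localhost:5000/back?fps=10&h=1000
```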

-Debug info is available at `http://localhost:5000/debug/stats`

### `/<camera_name>/<object_name>/best.jpg`

The best snapshot for any object type. It is a full resolution image by default. You can change the size of the image by appending `h=height-in-pixels` to the endpoint.

### `/<camera_name>/latest.jpg`

The most recent frame that frigate has finished processing. It is a full resolution image by default. You can change the size of the image by appending `h=height-in-pixels` to the endpoint.

### `/debug/stats`

Contains some granular debug info that can be used for sensors in HomeAssistant.

## MQTT Messages

These are the MQTT messages generated by Frigate. The default topic_prefix is `frigate`, but can be changed in the config file.

@@ -208,18 +216,38 @@ Message published at the start of any tracked object. JSON looks as follows:

### frigate/<camera_name>/events/end

Same as `frigate/<camera_name>/events/start`, but with an `end_time` property as well.

-## Using a custom model

### frigate/<zone_name>/<object_name>

Publishes `ON` or `OFF` and is designed to be used as a binary sensor in HomeAssistant for whether or not that object type is detected in the zone.

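A consumer of these topics only needs to track the last payload per (zone, object) pair. A minimal sketch of that bookkeeping, assuming the default `frigate` topic_prefix (`update_zone_states` and the `front_yard` zone are illustrative, not part of Frigate):

```python
def update_zone_states(states, topic, payload):
    """Record zone occupancy from a frigate/<zone_name>/<object_name> message."""
    parts = topic.split("/")
    # Expect exactly: <topic_prefix>/<zone_name>/<object_name>
    if len(parts) != 3 or parts[0] != "frigate":
        return states
    _, zone, obj = parts
    states[(zone, obj)] = payload == "ON"
    return states

states = {}
update_zone_states(states, "frigate/front_yard/person", "ON")
print(states[("front_yard", "person")])  # True
update_zone_states(states, "frigate/front_yard/person", "OFF")
print(states[("front_yard", "person")])  # False
```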
## Using a custom model or labels

Models for both CPU and EdgeTPU (Coral) are bundled in the image. You can use your own models with volume mounts:

- CPU Model: `/cpu_model.tflite`
- EdgeTPU Model: `/edgetpu_model.tflite`
- Labels: `/labelmap.txt`

### Customizing the Labelmap

The labelmap can be customized to your needs. A common reason to do this is to combine multiple object types that are easily confused, such as car and truck, when you don't need that granularity. You must retain the same number of labels, but you can change the names. To change:

- Download the [COCO labelmap](https://dl.google.com/coral/canned_models/coco_labels.txt)
- Modify the label names as desired. For example, change `7 truck` to `7 car`
- Mount the new file at `/labelmap.txt` in the container with an additional volume

```
-v ./config/labelmap.txt:/labelmap.txt
```

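The rename step can also be scripted. A sketch assuming the Coral labelmap's `id`, two spaces, `name` line format (`remap_labels` is an illustrative helper, not a Frigate utility):

```python
def remap_labels(labelmap_text, renames):
    """Rename labels (e.g. fold truck into car) without changing the label count."""
    out = []
    for line in labelmap_text.splitlines():
        idx, sep, name = line.partition("  ")
        out.append(idx + sep + renames.get(name, name))
    return "\n".join(out)

labels = "2  car\n7  truck"
print(remap_labels(labels, {"truck": "car"}))  # "2  car" and "7  car"
```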
## Masks and limiting detection to a certain area

You can create a *bitmap (bmp)* file with the same aspect ratio as your camera feed to limit detection to certain areas. The mask works by looking at the bottom center of any bounding box (first image, red dot below) and comparing that to your mask. If that red dot falls on an area of your mask that is black, the detection (and motion) will be ignored. The mask in the second image would limit detection on this camera to only objects that are in the front yard and not the street.

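The bottom-center check described above can be sketched as follows; a toy 4x4 grid stands in for the real bmp, and this is a rough illustration, not Frigate's actual implementation:

```python
def passes_mask(mask, bbox):
    """Return True if a detection should be kept.

    mask: 2D grid of pixel values (0 = black/ignored, 255 = white/detect).
    bbox: (x_min, y_min, x_max, y_max) in mask coordinates.
    """
    x_min, y_min, x_max, y_max = bbox
    # The check point is the bottom center of the bounding box.
    check_x = (x_min + x_max) // 2
    check_y = y_max
    return mask[check_y][check_x] > 0

# 4x4 mask: top half white (detect), bottom half black (ignored).
mask = [[255] * 4] * 2 + [[0] * 4] * 2
print(passes_mask(mask, (0, 0, 3, 1)))  # True: bottom center lands on white
print(passes_mask(mask, (0, 2, 3, 3)))  # False: bottom center lands on black
```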
-<a href="docs/example-mask-check-point.png"><img src="docs/example-mask-check-point.png" height="300"></a>
-<a href="docs/example-mask.bmp"><img src="docs/example-mask.bmp" height="300"></a>
-<a href="docs/example-mask-overlay.png"><img src="docs/example-mask-overlay.png" height="300"></a>
+<img src="docs/example-mask-check-point.png" height="300">
+<img src="docs/example-mask.bmp" height="300">
+<img src="docs/example-mask-overlay.png" height="300">

## Zones

Zones allow you to define a specific area of the frame and apply additional filters for object types so you can determine whether or not an object is within a particular area. Zones cannot have the same name as a camera. If desired, a single zone can include multiple cameras if you have multiple cameras covering the same area. See the sample config for details on how to configure.

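Per the sample config, an object is considered in a zone when the bottom center of its bounding box falls inside the zone polygon (at least 3 points). A rough ray-casting sketch of that containment test, not Frigate's actual code:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (px, py) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray from (x, y) with each edge.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Square zone with corners (0,0) and (10,10); test two bounding-box bottom centers.
zone = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, zone))   # True
print(point_in_polygon(15, 5, zone))  # False
```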
During testing, `draw_zones` can be set in the config to tell frigate to draw the zone on the frames so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.

![Zone Example](docs/zone_example.jpg)

## Tips

- Lower the framerate of the video feed on the camera to reduce the CPU usage for capturing the feed. Not as effective, but you can also modify the `take_frame` [configuration](config/config.example.yml) for each camera to only analyze every other frame, or every third frame, etc.

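With `take_frame`, the camera still captures every frame but only a subset is analyzed. A sketch of the effect, assuming counting starts at the first frame (`frames_to_analyze` is illustrative, not Frigate code):

```python
def frames_to_analyze(total_frames, take_frame):
    """With take_frame=N, only every Nth captured frame index is analyzed."""
    return [i for i in range(total_frames) if i % take_frame == 0]

print(frames_to_analyze(10, 1))  # every frame: [0, 1, ..., 9]
print(frames_to_analyze(10, 3))  # every third frame: [0, 3, 6, 9]
```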
@@ -66,7 +66,7 @@ objects:
    person:
      min_area: 5000
      max_area: 100000
-     threshold: 0.5
+     threshold: 0.8

  zones:
    #################
@@ -76,8 +76,8 @@ zones:
    cameras:
      front_door:
        ####################
-       # For each camera, a list of x,y coordinates to define the polygon of the zone.
-       # Can also be a comma separated string of all x,y coordinates combined.
+       # For each camera, a list of x,y coordinates to define the polygon of the zone. The top
+       # left corner is 0,0. Can also be a comma separated string of all x,y coordinates combined.
+       # The same zone can exist across multiple cameras if they have overlapping FOVs.
+       # An object is determined to be in the zone based on whether or not the bottom center
+       # of its bounding box is within the polygon. The polygon must have at least 3 points.
@@ -96,7 +96,7 @@ zones:
      person:
        min_area: 5000
        max_area: 100000
-       threshold: 0.5
+       threshold: 0.8
  driveway:
    cameras:
      front_door:
@@ -186,4 +186,4 @@ cameras:
      person:
        min_area: 5000
        max_area: 100000
-       threshold: 0.5
+       threshold: 0.8
BIN docs/zone_example.jpg (normal file)
Binary file not shown. Size: 73 KiB
@@ -230,7 +230,6 @@ class TrackedObjectProcessor(threading.Thread):
        ###

        # get the zones that are relevant for this camera
        # TODO: precompute this
        relevant_zones = [zone for zone, config in self.zone_config.items() if camera in config]
        for zone in relevant_zones:
            # create the set of labels in the current frame and previously reported