Compare commits


117 Commits
v0.15.0 ... dev

Author SHA1 Message Date
Lukas Wolfsteiner
92553fa666
fix: Missing link to go2rtc's logging doc (#16606)
Updated the link to go2rtc docs for logging configuration.
2025-02-15 19:57:59 -06:00
Josh Hawkins
5264a18dfa
Small fixes and docs tweaks (#16595)
* don't clear url params if we're creating a new object mask

* use correct threshold var in debug log

* docs tweak for mjpeg cameras
2025-02-15 06:56:45 -07:00
Josh Hawkins
6bb1a5dfd2
fix renaming exports with a slash (#16588) 2025-02-14 19:18:14 -07:00
Josh Hawkins
7b3556e4ad
ensure we copy current frame to prevent a segfault crash (#16591) 2025-02-14 18:53:07 -06:00
Josh Hawkins
9a07505075
More LPR improvements (#16587)
* define a format option and adjust thresholds

* config updates

* docs

* docs clarity
2025-02-14 15:12:36 -07:00
Josh Hawkins
0b65137831
Return 404 for non-existent vod module media (#16586)
* Check if video source exists before showing player

* add comment

* also check 404

* language

* return 404 with vod module
2025-02-14 12:05:05 -07:00
Nicolas Mowen
761c5109dc
Update nvidia driver req (#16560) 2025-02-13 17:18:38 -06:00
Josh Hawkins
729f5c0833
LPR improvements (#16559)
* use a small yolov9 model for detection

* use yolov9 for users without frigate+ and update retention algorithm

* new lpr config fields

* levenshtein distance package

* tweaks

* docs
2025-02-13 16:08:56 -07:00
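
A rough sketch of the Levenshtein-based plate matching described above (a hypothetical helper using the `Levenshtein` package added to the requirements; Frigate's actual retention logic may differ):

```python
from Levenshtein import distance

# Hypothetical helper: treat an OCR result as matching a known plate when
# the edit distance is small relative to the plate's length.
def match_known_plate(detected: str, known_plates: list[str], max_ratio: float = 0.2):
    best_plate, best_ratio = None, 1.0
    for plate in known_plates:
        ratio = distance(detected, plate) / max(len(plate), 1)
        if ratio < best_ratio:
            best_plate, best_ratio = plate, ratio
    return best_plate if best_ratio <= max_ratio else None

print(match_known_plate("ABC1Z3", ["ABC123", "XYZ789"]))  # -> "ABC123"
```
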
Nicolas Mowen
f7199f205f
Update docs for pcie coral driver (#16548) 2025-02-13 11:00:34 -06:00
Josh Hawkins
d6b5dc93cc
Fix streaming dialog and use less text on register button (#16518) 2025-02-12 06:16:32 -07:00
Josh Hawkins
11baf237bc
Ensure all streaming settings are saved correctly on mobile (#16511)
* Ensure streaming settings are saved correctly on mobile

* remove extra check
2025-02-11 16:49:22 -07:00
Nicolas Mowen
73fee6372b
Remove obsolete event clip logic (#16504)
* Remove obsolete event clip logic

* Formatting
2025-02-11 15:16:10 -06:00
Josh Hawkins
2458f667c4
Refactor lpr into real time data processor (#16497) 2025-02-11 13:45:13 -07:00
Josh Hawkins
f3e2cf0a58
Small fix and docs update (#16499)
* Small docs tweak and bugfix

* don't remove page arg either
2025-02-11 13:23:41 -07:00
Nicolas Mowen
0f0b2687af
Add support for YoloV9 to OpenVINO (#16495)
* Add support for yolov9 to OpenVINO

* Cleanup detector docs

* Fix link
2025-02-11 11:23:19 -07:00
Josh Hawkins
a3ede3cf8a
Snap points to edges and create object mask from bounding box (#16488) 2025-02-11 09:08:28 -07:00
Nicolas Mowen
b594f198a9
Consolidate HailoRT into the main Docker Image (#16487)
* Simplify main build to include hailo

* Update docs

* Remove hailo docker build
2025-02-11 09:08:13 -07:00
Josh Hawkins
4ef6214029
Tracking improvements (#16484)
* norfair tracker config per object type

* change default R back to 3.4

* separate trackers for static and autotracking cameras

* tweak params and fix debug draw

* ensure all trackers are correctly updated even when there are no detections

* basic reid with histograms

* check mp value

* check mp value again

* stationary objects won't have embeddings

* don't switch trackers when autotracking is toggled after startup

* improve motion detection during autotracking

* use helper function

* get histogram in tracker instead of detect
2025-02-11 09:37:58 -06:00
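
The histogram-based re-identification mentioned in #16484 can be sketched along these lines (an illustrative OpenCV snippet, not the tracker's actual code):

```python
import cv2
import numpy as np

def histogram_similarity(crop_a: np.ndarray, crop_b: np.ndarray) -> float:
    hists = []
    for crop in (crop_a, crop_b):
        hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
        # 2D hue/saturation histogram: cheap but reasonably illumination-tolerant
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        hists.append(hist)
    # correlation: 1.0 = identical appearance, lower = less similar
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)
```
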
Josh Hawkins
82f8694464
Toggle review alerts and detections (#16482)
* backend

* frontend

* docs

* fix topic name and initial websocket state

* update reference config

* fix mqtt docs

* fix initial topics

* don't apply max severity when alerts/detections are disabled

* fix ws merge

* tweaks
2025-02-11 07:46:25 -07:00
Josh Hawkins
c54259ecc6
use persistence for hls player muting (#16481) 2025-02-11 06:56:15 -07:00
Josh Hawkins
7e48b3514c
remove extraneous print from recordings summary code (#16468) 2025-02-11 05:19:54 -07:00
Josh Hawkins
f0270c6e34
fix non-awaited onvif calls (#16469) 2025-02-11 05:19:20 -07:00
Nicolas Mowen
ac3dfbc30d
Set stop event first (#16466) 2025-02-10 21:22:33 -06:00
Josh Hawkins
9a0211a71c
Improve Notifications (#16453)
* backend

* frontend

* add notification config at camera level

* camera level notifications in dispatcher

* initial onconnect

* frontend

* backend for suspended notifications

* frontend

* use base communicator

* initialize all cameras in suspended array and use 0 for unsuspended

* remove switch and use select for suspending in frontend

* use timestamp instead of datetime

* frontend tweaks

* mqtt docs

* fix button width

* use grid for layout

* use thread and queue for processing notifications with 10s timeout

* clean up

* move async code to main class

* tweaks

* docs

* remove warning message
2025-02-10 19:47:15 -07:00
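
The thread-and-queue pattern described in #16453 looks roughly like this (a minimal sketch; `send_push` stands in for the real web-push call):

```python
import queue
import threading

notification_queue: queue.Queue = queue.Queue()
stop_event = threading.Event()

def send_push(payload: dict) -> None:
    # stand-in for the real web-push call (pywebpush is in Frigate's deps)
    print(f"sending notification: {payload}")

def notification_worker() -> None:
    # drain the queue on a background thread so slow pushes never block the
    # dispatcher; upstream gives each notification a 10s processing budget
    while not stop_event.is_set():
        try:
            payload = notification_queue.get(timeout=1.0)
        except queue.Empty:
            continue
        try:
            send_push(payload)
        except Exception as exc:
            print(f"notification failed: {exc}")

threading.Thread(target=notification_worker, daemon=True).start()
notification_queue.put({"title": "Person detected", "camera": "front"})
```
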
Nicolas Mowen
198d067e25
Implement support for YOLOv9 via ONNX (#16459)
* WIP yolov9

* Implement post processing for yolov9

* Cleanup detection

* Update docs to make note of supported yolov9

* Move post processing to separate utility

* Add note about other models
2025-02-10 15:00:12 -06:00
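
The post-processing utility referenced above presumably decodes the usual YOLOv8/v9-style output; a generic sketch (assumed output shape `(1, 84, N)`: 4 box coordinates plus 80 class scores per candidate, which may not match the exact utility added here):

```python
import cv2
import numpy as np

def postprocess(output: np.ndarray, conf_thres: float = 0.5, iou_thres: float = 0.45):
    preds = output[0].T                       # (N, 84): cx, cy, w, h, 80 scores
    scores = preds[:, 4:].max(axis=1)         # best class confidence per box
    class_ids = preds[:, 4:].argmax(axis=1)
    keep = scores >= conf_thres
    boxes = preds[keep, :4].copy()
    scores, class_ids = scores[keep], class_ids[keep]
    boxes[:, 0] -= boxes[:, 2] / 2            # cx,cy,w,h -> top-left x,y,w,h
    boxes[:, 1] -= boxes[:, 3] / 2
    idxs = cv2.dnn.NMSBoxes(boxes.tolist(), scores.tolist(), conf_thres, iou_thres)
    return [(int(class_ids[i]), float(scores[i]), boxes[i])
            for i in np.array(idxs).flatten()]
```
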
Josh Hawkins
72209986b6
Estimated object speed for zones (#16452)
* utility functions

* backend config

* backend object speed tracking

* draw speed on debug view

* basic frontend zone editor

* remove line sorting

* fix types

* highlight line on canvas when entering value in zone edit pane

* rename vars and add validation

* ensure speed estimation is disabled when user adds more than 4 points

* pixel velocity in debug

* unit_system in config

* ability to define unit system in config

* save max speed to db

* frontend

* docs

* clarify docs

* utility functions

* backend config

* backend object speed tracking

* draw speed on debug view

* basic frontend zone editor

* remove line sorting

* fix types

* highlight line on canvas when entering value in zone edit pane

* rename vars and add validation

* ensure speed estimation is disabled when user adds more than 4 points

* pixel velocity in debug

* unit_system in config

* ability to define unit system in config

* save max speed to db

* frontend

* docs

* clarify docs

* fix duplicates from merge

* include max_estimated_speed in api responses

* add units to zone edit pane

* catch undefined

* add average speed

* clarify docs

* only track average speed when object is active

* rename vars

* ensure points and distances are ordered clockwise

* only store the last 10 speeds like score history

* remove max estimated speed

* update docs

* update docs

* fix point ordering

* improve readability

* docs inertia recommendation

* fix point ordering

* check object frame time

* add velocity angle to frontend

* docs clarity

* add frontend speed filter

* fix mqtt docs

* fix mqtt docs

* don't try to remove distances if they weren't already defined

* don't display estimates on debug view/snapshots if object is not in a speed tracking zone

* docs

* implement speed_threshold for zone presence

* docs for threshold

* better ground plane image

* improve image zone size

* add inertia to speed threshold example
2025-02-10 13:23:42 -07:00
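
Stripped of the ground-plane calibration, the core of the speed estimation added in #16452 reduces to scaling a pixel displacement into real-world units (an oversimplified sketch; Frigate derives the scale from the zone's four points and the user-supplied edge distances):

```python
# Hypothetical helper, not Frigate's actual math.
def estimate_speed_kph(p1, p2, dt_seconds: float, meters_per_pixel: float) -> float:
    dx = (p2[0] - p1[0]) * meters_per_pixel
    dy = (p2[1] - p1[1]) * meters_per_pixel
    meters_per_second = (dx**2 + dy**2) ** 0.5 / dt_seconds
    return meters_per_second * 3.6

# e.g. an object moving 120 px in 0.5 s where 1 px is roughly 0.05 m -> ~43 km/h
print(round(estimate_speed_kph((0, 0), (120, 0), 0.5, 0.05)))
```
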
Josh Hawkins
dd7820e4ee
Improve live streaming (#16447)
* config file changes

* config migrator

* stream selection on single camera live view

* camera streaming settings dialog

* manage persistent group streaming settings

* apply streaming settings in camera groups

* add ability to clear all streaming settings from settings

* docs

* update reference config

* fixes

* clarify docs

* use first stream as default in dialog

* ensure still image is visible after switching stream type to none

* docs

* clarify docs

* add ability to continue playing stream in background

* fix props

* put stream selection inside dropdown on desktop

* add capabilities to live mode hook

* live context menu component

* resize observer: only return new dimensions if they've actually changed

* pass volume prop to players

* fix slider bug, https://github.com/shadcn-ui/ui/issues/1448

* update react-grid-layout

* prevent animated transitions on draggable grid layout

* add context menu to dashboards

* use provider

* streaming dialog from context menu

* docs

* add jsmpeg warning to context menu

* audio and two way talk indicators in single camera view

* add link to debug view

* don't use hook

* create manual events from live camera view

* maintain grow classes on grid items

* fix initial volume state on default dashboard

* fix pointer events causing context menu to end up underneath image on iOS

* mobile drawer tweaks

* stream stats

* show settings menu for non-restreamed cameras

* consistent settings icon

* tweaks

* optional stats to fix birdseye player

* add toaster to live camera view

* fix crash on initial save in streaming dialog

* don't require restreaming for context menu streaming settings

* add debug view to context menu

* stats fixes

* update docs

* always show stream info when restreamed

* update camera streaming dialog

* make note of no h265 support for webrtc

* docs clarity

* ensure docs show streams as a dict

* docs clarity

* fix css file

* tweaks
2025-02-10 09:42:35 -07:00
Josh Hawkins
2a28964e63
Improve UI logs (#16434)
* use react-logviewer and backend streaming

* layout adjustments

* readd copy handler

* reorder and fix key

* add loading state

* handle frigate log consolidation

* handle newlines in sheet

* update react-logviewer

* fix scrolling and use chunked log download

* don't combine frigate log lines with timestamp

* basic deduplication

* use react-logviewer and backend streaming

* layout adjustments

* readd copy handler

* reorder and fix key

* add loading state

* handle frigate log consolidation

* handle newlines in sheet

* update react-logviewer

* fix scrolling and use chunked log download

* don't combine frigate log lines with timestamp

* basic deduplication

* move process logs function to services util

* improve layout and scrolling behavior

* clean up
2025-02-10 08:38:56 -07:00
Josh Hawkins
e207b2f50b
Refactor export filenames to include start and end date/time (#16446) 2025-02-10 08:30:23 -07:00
Josh Hawkins
d5b60237a2
Improve display of recordings data (#16436)
* backend

* frontend

* add earliest recording available to storage metrics page
2025-02-09 16:02:36 -07:00
Josh Hawkins
bc96db8612
Add ability to set mqtt qos in config (#16435) 2025-02-09 16:45:04 -06:00
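
The new option presumably slots into the existing `mqtt` section, along these lines (host is illustrative):

```yaml
mqtt:
  host: mqtt.example.local
  qos: 1  # new in #16435; MQTT quality-of-service level 0, 1, or 2
```
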
Nicolas Mowen
81bd956ae8
Fix sanitized api causing issues (#16433) 2025-02-09 14:50:12 -07:00
Josh Hawkins
c8cec63cb9
Object area debugging and improvements (#16432)
* add ability to specify min and max area as percentages

* debug draw area and ratio

* docs

* update for best percentage
2025-02-09 14:48:23 -07:00
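
With this change, area filters can presumably be expressed as a fraction of the frame instead of raw pixels; a hypothetical config sketch:

```yaml
objects:
  filters:
    person:
      min_area: 0.005  # 0.5% of the frame (float) instead of e.g. 5000 px (int)
      max_area: 0.8
```
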
Josh Hawkins
83beacf84a
Add autotracking calibration message (#16431)
* check autotracking calibration values before writing to config

* docs

* clarify log message
2025-02-09 14:29:08 -07:00
Josh Hawkins
cc2dbdcb44
Timeline improvements (#16429)
* virtualize event segments

* use virtual segments in event review timeline

* add segmentkey to props

* virtualize motion segments

* use virtual segments in motion review timeline

* update draggable element hook to use only math

* timeline zooming hook

* add zooming to event review timeline

* update playground

* zoomable timeline on recording view

* consolidate divs in summary timeline

* only calculate motion data for visible motion segments

* use swr loading state

* fix motion only

* keep handlebar centered when zooming

* zoom animations

* clean up

* ensure motion only checks both halves of segment

* prevent handlebar jump when using motion only mode
2025-02-09 14:13:32 -07:00
towerhand
1f89844c67
Use /api/metrics instead of /metrics (#16425) 2025-02-09 12:50:42 -07:00
Nicolas Mowen
81a56549da
Face recognition UI improvements (#16422)
* Rework face recognition APIs

* Fix error message on cancel

* Add ability to create new face library
2025-02-09 13:22:25 -06:00
Nicolas Mowen
c58d2add37
Fix missing prometheus commit (#16415)
* Add prometheus metrics

* add docs for metrics

* sidebar

* lint

* lint

---------

Co-authored-by: Mitch Ross <mitchross@users.noreply.github.com>
2025-02-09 10:04:39 -07:00
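
A minimal prometheus-client sketch of the kind of thing an `/api/metrics` endpoint exposes (metric name and labels are assumptions, not Frigate's actual series):

```python
from prometheus_client import CollectorRegistry, Gauge, generate_latest

registry = CollectorRegistry()
inference_speed = Gauge(
    "frigate_detector_inference_speed_seconds",  # metric name assumed
    "Time spent running object detection",
    ["detector"],
    registry=registry,
)
inference_speed.labels(detector="ov").set(0.012)
print(generate_latest(registry).decode())  # Prometheus text exposition format
```
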
Nicolas Mowen
a42ad7ead9
UI fixes (#16406)
* Fix new review item banner blocking third chip

* Fix custom export mode
2025-02-09 07:36:55 -06:00
Nicolas Mowen
973d3aed9a
Disable jetson builds (#16396) 2025-02-08 17:20:58 -06:00
Nicolas Mowen
fa300742ea
Fix build (#16393)
* Fix rpi build

* Attempt to fix jetson builds
2025-02-08 14:51:42 -06:00
Nicolas Mowen
15472274ee Update docs sidebar name (#16370)
* Clarify classification

* Fix face hierarchy as well
2025-02-08 12:47:01 -06:00
Nicolas Mowen
f3485bfc13 Sanitize provided name 2025-02-08 12:47:01 -06:00
Nicolas Mowen
060ad34e1d Update cudnn and onnxruntime (#16332) 2025-02-08 12:47:01 -06:00
Josh Hawkins
ebf4403eca Add endpoint for fetching batch review items (#16254) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
fb316874ef Quick fix for face rec (#16226)
* Check both

* Fix api order
2025-02-08 12:47:01 -06:00
Nicolas Mowen
9236898a9d Sub label sensors (#16218)
* Support mqtt sensors for logo attributes

* Expose in api
2025-02-08 12:47:01 -06:00
Nicolas Mowen
1c3527f5c4 Face recognition reprocess (#16212)
* Implement update topic

* Add API for reprocessing face

* Get reprocess working

* Fix crash when no faces exist

* Simplify
2025-02-08 12:47:01 -06:00
Nicolas Mowen
6f4002a56f Add training face library information to docs (#16169) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
3f99ff65ed Face recognition improvements (#16034) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
c7c8575c9b Bird classification (#15966)
* Start working on bird processor

* Initial setup for bird processing

* Improvements to handling

* Get classification working

* Cleanup classification

* Add classification config

* Update sort
2025-02-08 12:47:01 -06:00
Nicolas Mowen
63dbcd79e2 Update hailo deps (#15958) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
9dc85d4a76 Processing refactor (#15935)
* Refactor post processor to be real time processor

* Build out generic API for post processing

* Cleanup

* Fix
2025-02-08 12:47:01 -06:00
Nicolas Mowen
88686c44fe Generalize postprocessing (#15931)
* Actually send result to face registration

* Define postprocessing api and move face processing to fit

* Standardize request handling

* Standardize handling of processors

* Rename processing metrics

* Cleanup

* Standardize object end

* Update to newer formatting

* One more

* One more
2025-02-08 12:47:01 -06:00
Nicolas Mowen
3f1d85e189 Fix onvif packages (#15906)
* Don't replace packages

* Formatting
2025-02-08 12:47:01 -06:00
Josh Hawkins
283f1b19a7 Only print line and key/value when a line number can be found (#15897) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
ab8f9e5412 Upgrade onvif-zeep dependency to use onvif-zeep-async (#15894)
* Upgrade to new dependency

* Start onvif work

* Update for async calls
2025-02-08 12:47:01 -06:00
Nicolas Mowen
4f85b18b08 Improvements to face recognition (#15854)
* Do not add margin to face images

* remove margin

* Correctly clear
2025-02-08 12:47:01 -06:00
Nicolas Mowen
a6ae208fe7 Add metrics page for embeddings and face / license plate processing times (#15818)
* Get stats for embeddings inferences

* cleanup embeddings inferences

* Enable UI for feature metrics

* Change threshold

* Fix check

* Update python for actions

* Set python version

* Ignore type for now
2025-02-08 12:47:01 -06:00
Nicolas Mowen
0c13227f7d Fix facedet download (#15811)
* Support downloading face models

* Handle download and loading correctly

* Add face dir creation

* Fix error

* Fix

* Formatting

* Move upload to button

* Show number of faces in library for each name

* Add text color for score

* Cleanup
2025-02-08 12:47:01 -06:00
Nicolas Mowen
1edbd2d498 Refactor camera activity processing (#15803)
* Replace object label sensors with new manager

* Implement zone topics

* remove unused
2025-02-08 12:47:01 -06:00
Marc Altmann
4c7d4e6c0a rockchip: update dependencies and add script for model conversion (#15699)
* rockchip: update dependencies and add script for model conversion

* rockchip: update docs

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-02-08 12:47:01 -06:00
Nicolas Mowen
458ca4a983 Add support for SR-IOV GPU stats (#15796)
* Add option to treat GPU as SRIOV in order for stats to work correctly

* Add to intel docs

* fix tests
2025-02-08 12:47:01 -06:00
Nicolas Mowen
6a83f40135 Add ffmpeg config to increase HEVC compatibility with Apple devices (#15795)
* Add config option for handling HEVC playback on Apple devices

* Update docs

* Remove unused
2025-02-08 12:47:01 -06:00
Nicolas Mowen
281407247b Implement face recognition training in UI (#15786)
* Rename debug to train

* Add api to train image as person

* Cleanup model running

* Formatting

* Fix

* Set face recognition page title
2025-02-08 12:47:01 -06:00
Nicolas Mowen
172e7d494f Add UI for managing face recognitions (#15757)
* Add ability to view attempts

* Improve UI

* Cleanup

* Correctly refresh ui when item is deleted

* Select correct library by default

* Add min score

* Cleanup
2025-02-08 12:47:01 -06:00
Nicolas Mowen
8763390dfe Face recognition logic improvements (#15679)
* Always initialize face model on startup

* Add ability to save face images for debugging

* Implement better face recognition reasonability
2025-02-08 12:47:01 -06:00
Nicolas Mowen
c26144da75 Change folder 2025-02-08 12:47:01 -06:00
Nicolas Mowen
d025495374 Set model size 2025-02-08 12:47:01 -06:00
Nicolas Mowen
f58fc4c367 Improve face recognition (#15670)
* Face recognition tuning

* Support face alignment

* Cleanup

* Correctly download model
2025-02-08 12:47:01 -06:00
Nicolas Mowen
cc6a740a0f Update TRT (#15646) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
909444dacf Make face library scrollable 2025-02-08 12:47:01 -06:00
Nicolas Mowen
c28a0ed9a3 Update openvino (#15634) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
cd0d37ce07 Update python deps (#15618)
* Update opencv

* Update cython

* Update scikit

* Update scipy
2025-02-08 12:47:01 -06:00
Nicolas Mowen
d0ad840ef4 Enable temporary caching of camera images to improve responsiveness of UI (#15614) 2025-02-08 12:47:01 -06:00
Josh Hawkins
edab4efa42 Preserve line numbers in config validation (#15584)
* use ruamel to parse and preserve line numbers for config validation

* maintain exception for non validation errors

* fix types

* include input in log messages
2025-02-08 12:47:01 -06:00
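
The reason for ruamel here: unlike a plain `yaml.safe_load`, ruamel's loader keeps line/column information on parsed nodes. A small illustration (assuming ruamel's `lc` attribute API):

```python
from ruamel.yaml import YAML

yaml = YAML()
config = yaml.load("cameras:\n  front:\n    detect:\n      width: 1280\n")
print(config.lc.line)                     # 0: line of the top-level mapping
print(config["cameras"].lc.key("front"))  # (1, 2): line/col of the "front" key
```
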
Nicolas Mowen
877b7b2910 Update base image (#15103)
* Change base image

* Update python

* Update coral library

* Fix source file

* Install correct apt packages

* Cleanup

* Fix installation of coral deps

* fix python installations

* Fix devcontainer build

* Get tensorrt build working

* Update other deps

* Filter out tflite log

* Get ROCm build working

* Get rockchip build working

* Get hailo build working

* Add note to comment
2025-02-08 12:47:01 -06:00
Nicolas Mowen
66675cf977 Face recognition fixes (#15222)
* Fix nginx max upload size

* Close upload dialog when done and add toasts

* Formatting

* fix ruff
2025-02-08 12:47:01 -06:00
Nicolas Mowen
0e4ff91d6b Improve face recognition (#15205)
* Validate faces using cosine distance and SVC

* Formatting

* Use opencv instead of face embedding

* Update docs for training data

* Adjust to score system

* Set bounds

* remove face embeddings

* Update writing images

* Add face library page

* Add ability to select file

* Install opencv deps

* Cleanup

* Use different deps

* Move deps

* Cleanup

* Only show face library for desktop

* Implement deleting

* Add ability to upload image

* Add support for uploading images
2025-02-08 12:47:01 -06:00
Nicolas Mowen
dd7b1be7f4 Remove standardization 2025-02-08 12:47:01 -06:00
Nicolas Mowen
102a7695a3 Fix check 2025-02-08 12:47:01 -06:00
Nicolas Mowen
755c9eea1c Remove hardcoded face name 2025-02-08 12:47:01 -06:00
Nicolas Mowen
e5fcc50ae2 Use SVC to normalize and classify faces for recognition (#14835)
* Add margin to detected faces for embeddings

* Standardize pixel values for face input

* Use SVC to classify faces

* Clear classifier when new face is added

* Formatting

* Add dependency
2025-02-08 12:47:01 -06:00
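
The SVC-based classification named in this commit follows a standard scikit-learn pattern; an illustrative sketch with placeholder embeddings:

```python
import numpy as np
from sklearn.svm import SVC

embeddings = np.random.rand(20, 128)          # placeholder face embeddings
labels = ["alice"] * 10 + ["bob"] * 10

clf = SVC(kernel="linear", probability=True)  # probability=True enables scores
clf.fit(embeddings, labels)

query = np.random.rand(1, 128)
probs = clf.predict_proba(query)[0]
best = clf.classes_[probs.argmax()]
print(best, probs.max())  # accept the label only above a confidence threshold
```
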
Josh Hawkins
8bb037f82e Use regular expressions for plate matching (#14727) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
a0c35101fb Update facenet model (#14647) 2025-02-08 12:47:01 -06:00
Josh Hawkins
af1eaac5ff LPR improvements (#14641) 2025-02-08 12:47:01 -06:00
Josh Hawkins
dbbfc735f0 Prevent division by zero in lpr confidence checks (#14615) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
711575736d Fix label check (#14610)
* Create config for parsing object

* Use in maintainer
2025-02-08 12:47:01 -06:00
Josh Hawkins
c4ce7f9800 License plate recognition (ALPR) backend (#14564)
* Update version

* Face recognition backend (#14495)

* Add basic config and face recognition table

* Reconfigure updates processing to handle face

* Crop frame to face box

* Implement face embedding calculation

* Get matching face embeddings

* Add support for face recognition based on existing faces

* Use arcface face embeddings instead of generic embeddings model

* Add apis for managing faces

* Implement face uploading API

* Build out more APIs

* Add min area config

* Handle larger images

* Add more debug logs

* fix calculation

* Reduce timeout

* Small tweaks

* Use webp images

* Use facenet model

* Improve face recognition (#14537)

* Increase requirements for face to be set

* Manage faces properly

* Add basic docs

* Simplify

* Separate out face recognition from semantic search

* Update docs

* Formatting

* Fix access (#14540)

* Face detection (#14544)

* Add support for face detection

* Add support for detecting faces during registration

* Set body size to be larger

* Undo

* Update version

* Face recognition backend (#14495)

* Add basic config and face recognition table

* Reconfigure updates processing to handle face

* Crop frame to face box

* Implement face embedding calculation

* Get matching face embeddings

* Add support for face recognition based on existing faces

* Use arcface face embeddings instead of generic embeddings model

* Add apis for managing faces

* Implement face uploading API

* Build out more APIs

* Add min area config

* Handle larger images

* Add more debug logs

* fix calculation

* Reduce timeout

* Small tweaks

* Use webp images

* Use facenet model

* Improve face recognition (#14537)

* Increase requirements for face to be set

* Manage faces properly

* Add basic docs

* Simplify

* Separate out face recognition from semantic search

* Update docs

* Formatting

* Fix access (#14540)

* Face detection (#14544)

* Add support for face detection

* Add support for detecting faces during registration

* Set body size to be larger

* Undo

* initial foundation for alpr with paddleocr

* initial foundation for alpr with paddleocr

* initial foundation for alpr with paddleocr

* config

* config

* lpr maintainer

* clean up

* clean up

* fix processing

* don't process for stationary cars

* fix order

* fixes

* check for known plates

* improved length and character by character confidence

* model fixes and small tweaks

* docs

* placeholder for non frigate+ model lp detection

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-02-08 12:47:01 -06:00
Nicolas Mowen
594a4e0ba3 Face detection (#14544)
* Add support for face detection

* Add support for detecting faces during registration

* Set body size to be larger

* Undo
2025-02-08 12:47:01 -06:00
Nicolas Mowen
c1d5510428 Fix access (#14540) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
a3d6266d96 Improve face recognition (#14537)
* Increase requirements for face to be set

* Manage faces properly

* Add basic docs

* Simplify

* Separate out face recognition from semantic search

* Update docs

* Formatting
2025-02-08 12:47:01 -06:00
Nicolas Mowen
aa19ec3ddb Face recognition backend (#14495)
* Add basic config and face recognition table

* Reconfigure updates processing to handle face

* Crop frame to face box

* Implement face embedding calculation

* Get matching face embeddings

* Add support for face recognition based on existing faces

* Use arcface face embeddings instead of generic embeddings model

* Add apis for managing faces

* Implement face uploading API

* Build out more APIs

* Add min area config

* Handle larger images

* Add more debug logs

* fix calculation

* Reduce timeout

* Small tweaks

* Use webp images

* Use facenet model
2025-02-08 12:47:01 -06:00
Nicolas Mowen
0e1139a7a4 Update version 2025-02-08 12:47:01 -06:00
Blake Blackshear
cc955b1e66 Merge remote-tracking branch 'origin/master' into dev 2025-02-08 10:42:48 -06:00
Josh Hawkins
da34ff964f
Remove development wording (#16378) 2025-02-08 10:27:50 -06:00
Nicolas Mowen
d6a2965cb2
Update openvino hardware inference times (#16368) 2025-02-07 10:52:21 -06:00
Nicolas Mowen
4b429e440b
Point to latest version of hailo script (#16351) 2025-02-06 10:31:24 -06:00
Josh Hawkins
8759b4a0d3
Clarify occupancy sensor usage in HA integration (#16333) 2025-02-05 09:56:16 -06:00
Rui Alves
df840b7cd5
Finish unit tests for review controller and start them for event controller (#15955)
* Started unit tests for the review controller

* Revert "Started unit tests for the review controller"

This reverts commit 7746eb146f.

* Started unit tests for GET /review/activity/motion Endpoint

* Started unit tests for GET /review/event/{event_id} Endpoint

* Continued unit tests for GET /review/event/{event_id} Endpoint

* Continued unit tests for GET /review/{event_id} Endpoint

* Continued unit tests for GET /review/{review_id} Endpoint

* Added unit tests for GET /review/{review_id}/viewed Endpoint

* Added unit tests for GET /stats Endpoint

* Added unit tests for GET /events Endpoint

* Updated unit tests for GET /events Endpoint

* Deleted unit tests for /events from test_http (updated tests are now in test_http_event.py)

* Removed duplicated test for GET /review/activity/motion Endpoint
2025-02-04 06:28:14 -07:00
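
Tests like these typically follow FastAPI's `TestClient` pattern; an illustrative stub (not the actual Frigate test suite):

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/review/activity/motion")
def motion_activity():
    return []  # stand-in for the real motion-activity query

client = TestClient(app)

def test_motion_activity_empty():
    resp = client.get("/review/activity/motion")
    assert resp.status_code == 200
    assert resp.json() == []
```
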
Nicolas Mowen
0645dc70a5
Detector docs (#16292)
* Refactor hardware docs to show model specific speeds

* Move hailo to first party detectors

* Make note of multiple detectors

* Improve hierarchy

* Update object_detectors.md

* Update hardware.md
2025-02-03 07:57:21 -06:00
Josh Hawkins
b230b35c62
Fix genai note (#16273) 2025-02-02 07:10:37 -07:00
Josh Hawkins
31da9351f0
Clarify genai provider and openai compatible endpoints (#16267) 2025-02-01 16:49:09 -07:00
Ben Clouser
93d39370b6
update docs to be more clear regarding audio support and go2rtc requirement (#16232)
* update docs to be more clear regarding audio support and go2rtc requirement

Signed-off-by: Ben Clouser <dev@benclouser.com>

* Update docs/docs/troubleshooting/faqs.md

* Update docs/docs/troubleshooting/faqs.md

* Update docs/docs/troubleshooting/faqs.md

* Clarify title

* Cleanup

---------

Signed-off-by: Ben Clouser <dev@benclouser.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-01-30 11:23:38 -07:00
Nicolas Mowen
9dc4e8f290
Add frigate notify to third party extensions (#16190) 2025-01-28 08:09:59 -06:00
glossyio
12e62488c6
corrected docs for /config/save to /api/config/save (#16077) 2025-01-21 16:52:42 -07:00
Blake Blackshear
b5e5127d48
update link (#15756) 2024-12-31 12:05:55 -06:00
PrplHaz4
24f4aa79c8
Change Amcrest example to subtype=3 (#15607)
I think this was meant to be a `3`
2024-12-19 21:47:11 -06:00
Nicolas Mowen
dfc94b5ad6
Add dahua and amcrest to camera specific documentation (#15605) 2024-12-19 17:24:34 -06:00
Nicolas Mowen
5acbe37e6f
Update camera specific settings to make note of hikvision authentication (#15552) 2024-12-17 11:31:59 -06:00
Blake Blackshear
2461d01329
Update hardware recs (#15254) 2024-11-29 07:20:33 -06:00
Nicolas Mowen
5cafca1be0
Add docs for go2rtc logging (#15204) 2024-11-26 09:34:40 -06:00
victpork
9c5a04f25f
Added code to download weights from new host (#15087) 2024-11-20 05:06:22 -06:00
Charles Crossan
1ffdd32013
Update authentication.md (#14980)
add detail to reset_admin_password setting
2024-11-14 08:13:37 -07:00
Nicolas Mowen
99506845f7
Update edge tpu docs for RPi 5 kernel (#14946) 2024-11-12 15:48:57 -06:00
Blake Blackshear
ffd05f90f3
update hardware recommendations (#14830) 2024-11-06 05:02:42 -07:00
Blake Blackshear
3a8c290f91
update docs for new labels (#14739) 2024-11-03 06:10:38 -06:00
255 changed files with 13758 additions and 2580 deletions

View File

@ -2,6 +2,7 @@ aarch
absdiff
airockchip
Alloc
alpr
Amcrest
amdgpu
analyzeduration
@ -61,6 +62,7 @@ dsize
dtype
ECONNRESET
edgetpu
facenet
fastapi
faststart
fflags
@ -114,6 +116,8 @@ itemsize
Jellyfin
jetson
jetsons
jina
jinaai
joserfc
jsmpeg
jsonify
@ -187,6 +191,7 @@ openai
opencv
openvino
OWASP
paddleocr
paho
passwordless
popleft
@ -308,4 +313,4 @@ yolo
yolonas
yolox
zeep
zerolatency
zerolatency

View File

@ -77,6 +77,7 @@ jobs:
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
jetson_jp4_build:
if: false
runs-on: ubuntu-22.04
name: Jetson Jetpack 4
steps:
@ -106,6 +107,7 @@ jobs:
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp4
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp4,mode=max
jetson_jp5_build:
if: false
runs-on: ubuntu-22.04
name: Jetson Jetpack 5
steps:
@ -162,6 +164,19 @@ jobs:
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64,mode=max
- name: AMD/ROCm general build
env:
AMDGPU: gfx
HSA_OVERRIDE: 0
uses: docker/bake-action@v6
with:
source: .
push: true
targets: rocm
files: docker/rocm/rocm.hcl
set: |
rocm.tags=${{ steps.setup.outputs.image-name }}-rocm
*.cache-from=type=gha
arm64_extra_builds:
runs-on: ubuntu-22.04
name: ARM Extra Build
@ -187,46 +202,6 @@ jobs:
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
combined_extra_builds:
runs-on: ubuntu-22.04
name: Combined Extra Builds
needs:
- amd64_build
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Hailo-8l build
uses: docker/bake-action@v6
with:
source: .
push: true
targets: h8l
files: docker/hailo8l/h8l.hcl
set: |
h8l.tags=${{ steps.setup.outputs.image-name }}-h8l
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-h8l
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-h8l,mode=max
- name: AMD/ROCm general build
env:
AMDGPU: gfx
HSA_OVERRIDE: 0
uses: docker/bake-action@v6
with:
source: .
push: true
targets: rocm
files: docker/rocm/rocm.hcl
set: |
rocm.tags=${{ steps.setup.outputs.image-name }}-rocm
*.cache-from=type=gha
# The majority of users running arm64 are rpi users, so the rpi
# build should be the primary arm64 image
assemble_default_build:

View File

@ -6,7 +6,7 @@ on:
- "docs/**"
env:
DEFAULT_PYTHON: 3.9
DEFAULT_PYTHON: 3.11
jobs:
build_devcontainer:

View File

@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.15.0
VERSION = 0.16.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty

View File

@ -1,40 +0,0 @@
# syntax=docker/dockerfile:1.6
ARG DEBIAN_FRONTEND=noninteractive
# Build Python wheels
FROM wheels AS h8l-wheels
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/hailo8l/requirements-wheels-h8l.txt /requirements-wheels-h8l.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
# Create a directory to store the built wheels
RUN mkdir /h8l-wheels
# Build the wheels
RUN pip3 wheel --wheel-dir=/h8l-wheels -c /requirements-wheels.txt -r /requirements-wheels-h8l.txt
FROM wget AS hailort
ARG TARGETARCH
RUN --mount=type=bind,source=docker/hailo8l/install_hailort.sh,target=/deps/install_hailort.sh \
/deps/install_hailort.sh
# Use deps as the base image
FROM deps AS h8l-frigate
# Copy the wheels from the wheels stage
COPY --from=h8l-wheels /h8l-wheels /deps/h8l-wheels
COPY --from=hailort /hailo-wheels /deps/hailo-wheels
COPY --from=hailort /rootfs/ /
# Install the wheels
RUN pip3 install -U /deps/h8l-wheels/*.whl
RUN pip3 install -U /deps/hailo-wheels/*.whl
# Copy base files from the rootfs stage
COPY --from=rootfs / /
# Set workdir
WORKDIR /opt/frigate/

View File

@ -1,34 +0,0 @@
target wget {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "wget"
}
target wheels {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "wheels"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "deps"
}
target rootfs {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "rootfs"
}
target h8l {
dockerfile = "docker/hailo8l/Dockerfile"
contexts = {
wget = "target:wget"
wheels = "target:wheels"
deps = "target:deps"
rootfs = "target:rootfs"
}
platforms = ["linux/arm64","linux/amd64"]
}

View File

@ -1,15 +0,0 @@
BOARDS += h8l
local-h8l: version
docker buildx bake --file=docker/hailo8l/h8l.hcl h8l \
--set h8l.tags=frigate:latest-h8l \
--load
build-h8l: version
docker buildx bake --file=docker/hailo8l/h8l.hcl h8l \
--set h8l.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-h8l
push-h8l: build-h8l
docker buildx bake --file=docker/hailo8l/h8l.hcl h8l \
--set h8l.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-h8l \
--push

View File

@ -1,19 +0,0 @@
#!/bin/bash
set -euxo pipefail
hailo_version="4.19.0"
if [[ "${TARGETARCH}" == "amd64" ]]; then
arch="x86_64"
elif [[ "${TARGETARCH}" == "arm64" ]]; then
arch="aarch64"
fi
wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${TARGETARCH}.tar.gz" |
tar -C / -xzf -
mkdir -p /hailo-wheels
wget -P /hailo-wheels/ "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${hailo_version}-cp39-cp39-linux_${arch}.whl"

View File

@ -1,12 +0,0 @@
appdirs==1.4.*
argcomplete==2.0.*
contextlib2==0.6.*
distlib==0.3.*
filelock==3.8.*
future==0.18.*
importlib-metadata==5.1.*
importlib-resources==5.1.*
netaddr==0.8.*
netifaces==0.10.*
verboselogs==1.7.*
virtualenv==20.17.*

View File

@ -4,6 +4,7 @@
sudo apt-get update
sudo apt-get install -y build-essential cmake git wget
hailo_version="4.20.0"
arch=$(uname -m)
if [[ $arch == "x86_64" ]]; then
@ -13,7 +14,7 @@ else
fi
# Clone the HailoRT driver repository
git clone --depth 1 --branch v4.19.0 https://github.com/hailo-ai/hailort-drivers.git
git clone --depth 1 --branch v${hailo_version} https://github.com/hailo-ai/hailort-drivers.git
# Build and install the HailoRT driver
cd hailort-drivers/linux/pcie

View File

@ -3,12 +3,12 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG BASE_IMAGE=debian:11
ARG SLIM_BASE=debian:11-slim
ARG BASE_IMAGE=debian:12
ARG SLIM_BASE=debian:12-slim
FROM ${BASE_IMAGE} AS base
FROM --platform=${BUILDPLATFORM} debian:11 AS base_host
FROM --platform=${BUILDPLATFORM} debian:12 AS base_host
FROM ${SLIM_BASE} AS slim-base
@ -66,8 +66,8 @@ COPY docker/main/requirements-ov.txt /requirements-ov.txt
RUN apt-get -qq update \
&& apt-get -qq install -y wget python3 python3-dev python3-distutils gcc pkg-config libhdf5-dev \
&& wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip" \
&& pip install -r /requirements-ov.txt
&& python3 get-pip.py "pip" --break-system-packages \
&& pip install --break-system-packages -r /requirements-ov.txt
# Get OpenVino Model
RUN --mount=type=bind,source=docker/main/build_ov_model.py,target=/build_ov_model.py \
@ -139,24 +139,17 @@ ARG TARGETARCH
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
&& apt-get -qq install -y \
apt-transport-https \
gnupg \
wget \
# the key fingerprint can be obtained from https://ftp-master.debian.org/keys.html
&& wget -qO- "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xA4285295FC7B1A81600062A9605C66F00D6C9793" | \
gpg --dearmor > /usr/share/keyrings/debian-archive-bullseye-stable.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/debian-archive-bullseye-stable.gpg] http://deb.debian.org/debian bullseye main contrib non-free" | \
tee /etc/apt/sources.list.d/debian-bullseye-nonfree.list \
apt-transport-https wget \
&& apt-get -qq update \
&& apt-get -qq install -y \
python3.9 \
python3.9-dev \
python3 \
python3-dev \
# opencv dependencies
build-essential cmake git pkg-config libgtk-3-dev \
libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \
libxvidcore-dev libx264-dev libjpeg-dev libpng-dev libtiff-dev \
gfortran openexr libatlas-base-dev libssl-dev\
libtbb2 libtbb-dev libdc1394-22-dev libopenexr-dev \
libtbbmalloc2 libtbb-dev libdc1394-dev libopenexr-dev \
libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
# sqlite3 dependencies
tclsh \
@ -164,14 +157,11 @@ RUN apt-get -qq update \
gcc gfortran libopenblas-dev liblapack-dev && \
rm -rf /var/lib/apt/lists/*
# Ensure python3 defaults to python3.9
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip"
&& python3 get-pip.py "pip" --break-system-packages
COPY docker/main/requirements.txt /requirements.txt
RUN pip3 install -r /requirements.txt
RUN pip3 install -r /requirements.txt --break-system-packages
# Build pysqlite3 from source
COPY docker/main/build_pysqlite3.sh /build_pysqlite3.sh
@ -180,6 +170,9 @@ RUN /build_pysqlite3.sh
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt
# Install HailoRT & Wheels
RUN --mount=type=bind,source=docker/main/install_hailort.sh,target=/deps/install_hailort.sh \
/deps/install_hailort.sh
# Collect deps in a single layer
FROM scratch AS deps-rootfs
@ -190,6 +183,7 @@ COPY --from=libusb-build /usr/local/lib /usr/local/lib
COPY --from=tempio /rootfs/ /
COPY --from=s6-overlay /rootfs/ /
COPY --from=models /rootfs/ /
COPY --from=wheels /rootfs/ /
COPY docker/main/rootfs/ /
@ -221,8 +215,8 @@ RUN --mount=type=bind,source=docker/main/install_deps.sh,target=/deps/install_de
/deps/install_deps.sh
RUN --mount=type=bind,from=wheels,source=/wheels,target=/deps/wheels \
python3 -m pip install --upgrade pip && \
pip3 install -U /deps/wheels/*.whl
python3 -m pip install --upgrade pip --break-system-packages && \
pip3 install -U /deps/wheels/*.whl --break-system-packages
COPY --from=deps-rootfs / /
@ -269,7 +263,7 @@ RUN apt-get update \
&& rm -rf /var/lib/apt/lists/*
RUN --mount=type=bind,source=./docker/main/requirements-dev.txt,target=/workspace/frigate/requirements-dev.txt \
pip3 install -r requirements-dev.txt
pip3 install -r requirements-dev.txt --break-system-packages
HEALTHCHECK NONE

View File

@ -8,10 +8,16 @@ SECURE_TOKEN_MODULE_VERSION="1.5"
SET_MISC_MODULE_VERSION="v0.33"
NGX_DEVEL_KIT_VERSION="v0.3.3"
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
apt-get update
source /etc/os-release
if [[ "$VERSION_ID" == "12" ]]; then
sed -i '/^Types:/s/deb/& deb-src/' /etc/apt/sources.list.d/debian.sources
else
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
fi
apt-get update
apt-get -yqq build-dep nginx
apt-get -yqq install --no-install-recommends ca-certificates wget

View File

@ -4,7 +4,7 @@ from openvino.tools import mo
ov_model = mo.convert_model(
"/models/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb",
compress_to_fp16=True,
transformations_config="/usr/local/lib/python3.9/dist-packages/openvino/tools/mo/front/tf/ssd_v2_support.json",
transformations_config="/usr/local/lib/python3.11/dist-packages/openvino/tools/mo/front/tf/ssd_v2_support.json",
tensorflow_object_detection_api_pipeline_config="/models/ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config",
reverse_input_channels=True,
)

View File

@ -4,8 +4,15 @@ set -euxo pipefail
SQLITE_VEC_VERSION="0.1.3"
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
source /etc/os-release
if [[ "$VERSION_ID" == "12" ]]; then
sed -i '/^Types:/s/deb/& deb-src/' /etc/apt/sources.list.d/debian.sources
else
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
fi
apt-get update
apt-get -yqq build-dep sqlite3 gettext git

View File

@ -11,33 +11,34 @@ apt-get -qq install --no-install-recommends -y \
lbzip2 \
procps vainfo \
unzip locales tzdata libxml2 xz-utils \
python3.9 \
python3 \
python3-pip \
curl \
lsof \
jq \
nethogs
# ensure python3 defaults to python3.9
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
nethogs \
libgl1 \
libglib2.0-0 \
libusb-1.0.0
mkdir -p -m 600 /root/.gnupg
# add coral repo
curl -fsSLo - https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
gpg --dearmor -o /etc/apt/trusted.gpg.d/google-cloud-packages-archive-keyring.gpg
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
echo "libedgetpu1-max libedgetpu/accepted-eula select true" | debconf-set-selections
# install coral runtime
wget -q -O /tmp/libedgetpu1-max.deb "https://github.com/feranick/libedgetpu/releases/download/16.0TF2.17.0-1/libedgetpu1-max_16.0tf2.17.0-1.bookworm_${TARGETARCH}.deb"
unset DEBIAN_FRONTEND
yes | dpkg -i /tmp/libedgetpu1-max.deb && export DEBIAN_FRONTEND=noninteractive
rm /tmp/libedgetpu1-max.deb
# enable non-free repo in Debian
if grep -q "Debian" /etc/issue; then
sed -i -e's/ main/ main contrib non-free/g' /etc/apt/sources.list
# install python3 & tflite runtime
if [[ "${TARGETARCH}" == "amd64" ]]; then
pip3 install --break-system-packages https://github.com/feranick/TFlite-builds/releases/download/v2.17.0/tflite_runtime-2.17.0-cp311-cp311-linux_x86_64.whl
pip3 install --break-system-packages https://github.com/feranick/pycoral/releases/download/2.0.2TF2.17.0/pycoral-2.0.2-cp311-cp311-linux_x86_64.whl
fi
# coral drivers
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y \
libedgetpu1-max python3-tflite-runtime python3-pycoral
if [[ "${TARGETARCH}" == "arm64" ]]; then
pip3 install --break-system-packages https://github.com/feranick/TFlite-builds/releases/download/v2.17.0/tflite_runtime-2.17.0-cp311-cp311-linux_aarch64.whl
pip3 install --break-system-packages https://github.com/feranick/pycoral/releases/download/2.0.2TF2.17.0/pycoral-2.0.2-cp311-cp311-linux_aarch64.whl
fi
# btbn-ffmpeg -> amd64
if [[ "${TARGETARCH}" == "amd64" ]]; then
@ -65,23 +66,15 @@ fi
# arch specific packages
if [[ "${TARGETARCH}" == "amd64" ]]; then
# use debian bookworm for amd / intel-i965 driver packages
echo 'deb https://deb.debian.org/debian bookworm main contrib non-free' >/etc/apt/sources.list.d/debian-bookworm.list
apt-get -qq update
# install amd / intel-i965 driver packages
apt-get -qq install --no-install-recommends --no-install-suggests -y \
i965-va-driver intel-gpu-tools onevpl-tools \
libva-drm2 \
mesa-va-drivers radeontop
# something about this dependency requires it to be installed in a separate call rather than in the line above
apt-get -qq install --no-install-recommends --no-install-suggests -y \
i965-va-driver-shaders
# intel packages use zst compression so we need to update dpkg
apt-get install -y dpkg
rm -f /etc/apt/sources.list.d/debian-bookworm.list
# use intel apt intel packages
wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | tee /etc/apt/sources.list.d/intel-gpu-jammy.list

docker/main/install_hailort.sh (new executable file, 14 lines added)
View File

@ -0,0 +1,14 @@
#!/bin/bash
set -euxo pipefail
hailo_version="4.20.0"
if [[ "${TARGETARCH}" == "amd64" ]]; then
arch="x86_64"
elif [[ "${TARGETARCH}" == "arm64" ]]; then
arch="aarch64"
fi
wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${TARGETARCH}.tar.gz" | tar -C / -xzf -
wget -P /wheels/ "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${hailo_version}-cp311-cp311-linux_${arch}.whl"

View File

@ -1,3 +1,4 @@
aiofiles == 24.1.*
click == 8.1.*
# FastAPI
aiohttp == 3.11.2
@ -10,10 +11,10 @@ imutils == 0.5.*
joserfc == 1.0.*
pathvalidate == 3.2.*
markupsafe == 2.1.*
python-multipart == 0.0.12
# General
mypy == 1.6.1
numpy == 1.26.*
onvif_zeep == 0.2.12
opencv-python-headless == 4.9.0.*
onvif-zeep-async == 3.1.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
@ -27,15 +28,19 @@ ruamel.yaml == 0.18.*
tzlocal == 5.2
requests == 2.32.*
types-requests == 2.32.*
scipy == 1.13.*
norfair == 2.2.*
setproctitle == 1.3.*
ws4py == 0.5.*
unidecode == 1.3.*
# Image Manipulation
numpy == 1.26.*
opencv-python-headless == 4.10.0.*
opencv-contrib-python == 4.9.0.*
scipy == 1.14.*
# OpenVino & ONNX
openvino == 2024.3.*
onnxruntime-openvino == 1.19.* ; platform_machine == 'x86_64'
onnxruntime == 1.19.* ; platform_machine == 'aarch64'
openvino == 2024.4.*
onnxruntime-openvino == 1.20.* ; platform_machine == 'x86_64'
onnxruntime == 1.20.* ; platform_machine == 'aarch64'
# Embeddings
transformers == 4.45.*
# Generative AI
@ -45,3 +50,21 @@ openai == 1.51.*
# push notifications
py-vapid == 1.9.*
pywebpush == 2.0.*
# alpr
pyclipper == 1.3.*
shapely == 2.0.*
Levenshtein==0.26.*
prometheus-client == 0.21.*
# HailoRT Wheels
appdirs==1.4.*
argcomplete==2.0.*
contextlib2==0.6.*
distlib==0.3.*
filelock==3.8.*
future==0.18.*
importlib-metadata==5.1.*
importlib-resources==5.1.*
netaddr==0.8.*
netifaces==0.10.*
verboselogs==1.7.*
virtualenv==20.17.*

View File

@ -1,2 +1,2 @@
scikit-build == 0.17.*
scikit-build == 0.18.*
nvidia-pyindex

View File

@ -81,6 +81,9 @@ http {
open_file_cache_errors on;
aio on;
# file upload size
client_max_body_size 10M;
# https://github.com/kaltura/nginx-vod-module#vod_open_file_thread_pool
vod_open_file_thread_pool default;
@ -106,6 +109,14 @@ http {
expires off;
keepalive_disable safari;
# vod module returns 502 for non-existent media
# https://github.com/kaltura/nginx-vod-module/issues/468
error_page 502 =404 /vod-not-found;
}
location = /vod-not-found {
return 404;
}
location /stream/ {

View File

@ -0,0 +1,20 @@
./subset/000000005001.jpg
./subset/000000038829.jpg
./subset/000000052891.jpg
./subset/000000075612.jpg
./subset/000000098261.jpg
./subset/000000181542.jpg
./subset/000000215245.jpg
./subset/000000277005.jpg
./subset/000000288685.jpg
./subset/000000301421.jpg
./subset/000000334371.jpg
./subset/000000348481.jpg
./subset/000000373353.jpg
./subset/000000397681.jpg
./subset/000000414673.jpg
./subset/000000419312.jpg
./subset/000000465822.jpg
./subset/000000475732.jpg
./subset/000000559707.jpg
./subset/000000574315.jpg

[20 binary image files added (the COCO subset JPEGs listed above), ranging from 14 KiB to 281 KiB; previews not shown.]

View File

@ -7,18 +7,23 @@ FROM wheels as rk-wheels
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/rockchip/requirements-wheels-rk.txt /requirements-wheels-rk.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
RUN sed -i "/onnxruntime/d" /requirements-wheels.txt
RUN python3 -m pip config set global.break-system-packages true
RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
RUN rm -rf /rk-wheels/opencv_python-*
FROM deps AS rk-frigate
ARG TARGETARCH
RUN --mount=type=bind,from=rk-wheels,source=/rk-wheels,target=/deps/rk-wheels \
pip3 install -U /deps/rk-wheels/*.whl
pip3 install --no-deps -U /deps/rk-wheels/*.whl --break-system-packages
WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY docker/rockchip/COCO /COCO
COPY docker/rockchip/conv2rknn.py /opt/conv2rknn.py
ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/librknnrt.so /usr/lib/
ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/librknnrt.so /usr/lib/
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffmpeg
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffprobe

View File

@ -0,0 +1,82 @@
import os
import rknn
import yaml
from rknn.api import RKNN
try:
with open(rknn.__path__[0] + "/VERSION") as file:
tk_version = file.read().strip()
except FileNotFoundError:
pass
try:
with open("/config/conv2rknn.yaml", "r") as config_file:
configuration = yaml.safe_load(config_file)
except FileNotFoundError:
raise Exception("Please place a config.yaml file in /config/conv2rknn.yaml")
if configuration["config"] != None:
rknn_config = configuration["config"]
else:
rknn_config = {}
if not os.path.isdir("/config/model_cache/rknn_cache/onnx"):
raise Exception(
"Place the onnx models you want to convert to rknn format in /config/model_cache/rknn_cache/onnx"
)
if "soc" not in configuration:
try:
with open("/proc/device-tree/compatible") as file:
soc = file.read().split(",")[-1].strip("\x00")
except FileNotFoundError:
raise Exception("Make sure to run docker in privileged mode.")
configuration["soc"] = [
soc,
]
if "quantization" not in configuration:
configuration["quantization"] = False
if "output_name" not in configuration:
configuration["output_name"] = "{{input_basename}}"
for input_filename in os.listdir("/config/model_cache/rknn_cache/onnx"):
for soc in configuration["soc"]:
quant = "i8" if configuration["quantization"] else "fp16"
input_path = "/config/model_cache/rknn_cache/onnx/" + input_filename
input_basename = input_filename[: input_filename.rfind(".")]
output_filename = (
configuration["output_name"].format(
quant=quant,
input_basename=input_basename,
soc=soc,
tk_version=tk_version,
)
+ ".rknn"
)
output_path = "/config/model_cache/rknn_cache/" + output_filename
rknn_config["target_platform"] = soc
rknn = RKNN(verbose=True)
rknn.config(**rknn_config)
if rknn.load_onnx(model=input_path) != 0:
raise Exception("Error loading model.")
if (
rknn.build(
do_quantization=configuration["quantization"],
dataset="/COCO/coco_subset_20.txt",
)
!= 0
):
raise Exception("Error building model.")
if rknn.export_rknn(output_path) != 0:
raise Exception("Error exporting rknn model.")

View File

@ -1 +1,2 @@
rknn-toolkit-lite2 @ https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/rknn_toolkit_lite2-2.0.0b0-cp39-cp39-linux_aarch64.whl
rknn-toolkit2 == 2.3.0
rknn-toolkit-lite2 == 2.3.0

View File

@ -34,7 +34,7 @@ RUN mkdir -p /opt/rocm-dist/etc/ld.so.conf.d/
RUN echo /opt/rocm/lib|tee /opt/rocm-dist/etc/ld.so.conf.d/rocm.conf
#######################################################################
FROM --platform=linux/amd64 debian:11 as debian-base
FROM --platform=linux/amd64 debian:12 as debian-base
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install --no-install-recommends libelf1 libdrm2 libdrm-amdgpu1 libnuma1 kmod
@ -51,7 +51,7 @@ COPY --from=rocm /opt/rocm-$ROCM /opt/rocm-$ROCM
RUN ln -s /opt/rocm-$ROCM /opt/rocm
RUN apt-get -y install g++ cmake
RUN apt-get -y install python3-pybind11 python3.9-distutils python3-dev
RUN apt-get -y install python3-pybind11 python3-distutils python3-dev
WORKDIR /opt/build
@ -70,10 +70,11 @@ RUN apt-get -y install libnuma1
WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY docker/rocm/requirements-wheels-rocm.txt /requirements.txt
RUN python3 -m pip install --upgrade pip \
&& pip3 uninstall -y onnxruntime-openvino \
&& pip3 install -r /requirements.txt
# Temporarily disabled to see if a new wheel can be built to support py3.11
#COPY docker/rocm/requirements-wheels-rocm.txt /requirements.txt
#RUN python3 -m pip install --upgrade pip \
# && pip3 uninstall -y onnxruntime-openvino \
# && pip3 install -r /requirements.txt
#######################################################################
FROM scratch AS rocm-dist
@ -86,12 +87,12 @@ COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*$AMDGPU* /opt/rocm-$ROCM/share
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx908* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*$AMDGPU* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-dist/ /
COPY --from=debian-build /opt/rocm/lib/migraphx.cpython-39-x86_64-linux-gnu.so /opt/rocm-$ROCM/lib/
COPY --from=debian-build /opt/rocm/lib/migraphx.cpython-311-x86_64-linux-gnu.so /opt/rocm-$ROCM/lib/
#######################################################################
FROM deps-prelim AS rocm-prelim-hsa-override0
ENV HSA_ENABLE_SDMA=0
ENV HSA_ENABLE_SDMA=0
COPY --from=rocm-dist / /

View File

@ -18,13 +18,14 @@ apt-get -qq install --no-install-recommends -y \
mkdir -p -m 600 /root/.gnupg
# enable non-free repo
sed -i -e's/ main/ main contrib non-free/g' /etc/apt/sources.list
echo "deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware" | tee -a /etc/apt/sources.list
apt update
# ffmpeg -> arm64
if [[ "${TARGETARCH}" == "arm64" ]]; then
# add raspberry pi repo
gpg --no-default-keyring --keyring /usr/share/keyrings/raspbian.gpg --keyserver keyserver.ubuntu.com --recv-keys 82B129927FA3303E
echo "deb [signed-by=/usr/share/keyrings/raspbian.gpg] https://archive.raspberrypi.org/debian/ bullseye main" | tee /etc/apt/sources.list.d/raspi.list
echo "deb [signed-by=/usr/share/keyrings/raspbian.gpg] https://archive.raspberrypi.org/debian/ bookworm main" | tee /etc/apt/sources.list.d/raspi.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y ffmpeg
fi

View File

@ -7,18 +7,19 @@ ARG DEBIAN_FRONTEND=noninteractive
FROM wheels as trt-wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH
RUN python3 -m pip config set global.break-system-packages true
# Add TensorRT wheels to another folder
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
RUN mkdir -p /trt-wheels && pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt
FROM tensorrt-base AS frigate-tensorrt
ENV TRT_VER=8.5.3
ENV TRT_VER=8.6.1
RUN python3 -m pip config set global.break-system-packages true
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 install -U /deps/trt-wheels/*.whl && \
pip3 install -U /deps/trt-wheels/*.whl --break-system-packages && \
ldconfig
ENV LD_LIBRARY_PATH=/usr/local/lib/python3.9/dist-packages/tensorrt:/usr/local/cuda/lib64:/usr/local/lib/python3.9/dist-packages/nvidia/cufft/lib
WORKDIR /opt/frigate/
COPY --from=rootfs / /
@ -31,4 +32,4 @@ COPY --from=trt-deps /usr/local/cuda-12.1 /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 install -U /deps/trt-wheels/*.whl
pip3 install -U /deps/trt-wheels/*.whl --break-system-packages

View File

@ -41,11 +41,11 @@ RUN --mount=type=bind,source=docker/tensorrt/detector/build_python_tensorrt.sh,t
&& TENSORRT_VER=$(cat /etc/TENSORRT_VER) /deps/build_python_tensorrt.sh
COPY docker/tensorrt/requirements-arm64.txt /requirements-tensorrt.txt
ADD https://nvidia.box.com/shared/static/9aemm4grzbbkfaesg5l7fplgjtmswhj8.whl /tmp/onnxruntime_gpu-1.15.1-cp39-cp39-linux_aarch64.whl
ADD https://nvidia.box.com/shared/static/psl23iw3bh7hlgku0mjo1xekxpego3e3.whl /tmp/onnxruntime_gpu-1.15.1-cp311-cp311-linux_aarch64.whl
RUN pip3 uninstall -y onnxruntime-openvino \
&& pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt \
&& pip3 install --no-deps /tmp/onnxruntime_gpu-1.15.1-cp39-cp39-linux_aarch64.whl
&& pip3 install --no-deps /tmp/onnxruntime_gpu-1.15.1-cp311-cp311-linux_aarch64.whl
FROM build-wheels AS trt-model-wheels
ARG DEBIAN_FRONTEND


@ -3,7 +3,7 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.03-py3
ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.12-py3
# Build TensorRT-specific library
FROM ${TRT_BASE} AS trt-deps


@ -1,6 +1,8 @@
/usr/local/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cudnn/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cuda_runtime/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cublas/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cuda_nvrtc/lib
/usr/local/lib/python3.9/dist-packages/tensorrt
/usr/local/cuda/lib64
/usr/local/lib/python3.11/dist-packages/nvidia/cudnn/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_runtime/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cublas/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_nvrtc/lib
/usr/local/lib/python3.11/dist-packages/tensorrt
/usr/local/lib/python3.11/dist-packages/nvidia/cufft/lib


@ -1,14 +1,14 @@
# NVidia TensorRT Support (amd64 only)
--extra-index-url 'https://pypi.nvidia.com'
numpy < 1.24; platform_machine == 'x86_64'
tensorrt == 8.5.3.*; platform_machine == 'x86_64'
cuda-python == 11.8; platform_machine == 'x86_64'
cython == 0.29.*; platform_machine == 'x86_64'
tensorrt == 8.6.1.*; platform_machine == 'x86_64'
cuda-python == 11.8.*; platform_machine == 'x86_64'
cython == 3.0.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu12 == 12.1.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu11 == 11.8.*; platform_machine == 'x86_64'
nvidia-cublas-cu11 == 11.11.3.6; platform_machine == 'x86_64'
nvidia-cudnn-cu11 == 8.6.0.*; platform_machine == 'x86_64'
nvidia-cudnn-cu12 == 9.5.0.*; platform_machine == 'x86_64'
nvidia-cufft-cu11==10.*; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.18.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.20.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'


@ -4,7 +4,9 @@ title: Advanced Options
sidebar_label: Advanced Options
---
### `logger`
### Logging
#### Frigate `logger`
Change the default log level for troubleshooting purposes.
@ -28,6 +30,18 @@ Examples of available modules are:
- `watchdog.<camera_name>`
- `ffmpeg.<camera_name>.<sorted_roles>` NOTE: All FFmpeg logs are sent as `error` level.
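For example, a config combining a default level with per-module overrides might look like this (a sketch; the module names shown are illustrative):

```yaml
logger:
  # Default log level for all modules
  default: info
  # Override the log level for individual modules
  logs:
    frigate.mqtt: debug
    watchdog.front_door: debug
```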
#### Go2RTC Logging
See [the go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#module-log) for logging configuration.
```yaml
go2rtc:
  streams:
    ...
  log:
    exec: trace
```
### `environment_vars`
This section can be used to set environment variables for those unable to modify the environment of the container (i.e., within HassOS).
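For example (the variable name and value here are placeholders):

```yaml
environment_vars:
  EXAMPLE_VAR: example_value
```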
@ -189,16 +203,16 @@ When frigate starts up, it checks whether your config file is valid, and if it i
### Via API
Frigate can accept a new configuration file as JSON at the `/config/save` endpoint. When updating the config this way, Frigate will validate the config before saving it, and return a `400` if the config is not valid.
Frigate can accept a new configuration file as JSON at the `/api/config/save` endpoint. When updating the config this way, Frigate will validate the config before saving it, and return a `400` if the config is not valid.
```bash
curl -X POST http://frigate_host:5000/config/save -d @config.json
curl -X POST http://frigate_host:5000/api/config/save -d @config.json
```
If you'd like, you can use your YAML config directly by using [`yq`](https://github.com/mikefarah/yq) to convert it to JSON:
```bash
yq r -j config.yml | curl -X POST http://frigate_host:5000/config/save -d @-
yq r -j config.yml | curl -X POST http://frigate_host:5000/api/config/save -d @-
```
### Via Command Line


@ -24,6 +24,11 @@ On startup, an admin user and password are generated and printed in the logs. It
In the event that you are locked out of your instance, you can tell Frigate to reset the admin password and print it in the logs on next startup using the `reset_admin_password` setting in your config file.
```yaml
auth:
  reset_admin_password: true
```
## Login failure rate limiting
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with SlowApi, and the string notation for valid values is available in [the documentation](https://limits.readthedocs.io/en/stable/quickstart.html#examples).
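As a sketch, assuming the `failed_login_rate_limit` option under `auth`, a rate limit using that string notation might look like this:

```yaml
auth:
  # At most 1 failed attempt per second, 5 per minute, and 20 per hour
  failed_login_rate_limit: "1/second;5/minute;20/hour"
```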


@ -167,3 +167,7 @@ To maintain object tracking during PTZ moves, Frigate tracks the motion of your
### Calibration seems to have completed, but the camera is not actually moving to track my object. Why?
Some cameras have firmware that reports that FOV RelativeMove, the ONVIF command that Frigate uses for autotracking, is supported. However, if the camera does not pan or tilt when an object comes into the required zone, your camera's firmware does not actually support FOV RelativeMove. One such camera is the Uniview IPC672LR-AX4DUPK. It actually moves its zoom motor instead of panning and tilting and does not follow the ONVIF standard whatsoever.
### Frigate reports an error saying that calibration has failed. Why?
Calibration measures the amount of time it takes for Frigate to make a series of movements with your PTZ. This error message is recorded in the log if these values are too high for Frigate to support calibrated autotracking. This is often the case when your camera's motor or network connection is too slow or your camera's firmware doesn't report the motor status in a timely manner. You can try running without calibration (just remove the `movement_weights` line from your config and restart), but if calibration fails, this often means that autotracking will behave unpredictably.


@ -22,7 +22,7 @@ Note that mjpeg cameras require encoding the video into h264 for recording, and
```yaml
go2rtc:
  streams:
    mjpeg_cam: "ffmpeg:{your_mjpeg_stream_url}#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.
    mjpeg_cam: "ffmpeg:http://your_mjpeg_stream_url#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.

cameras:
  ...
@ -65,19 +65,32 @@ ffmpeg:
## Model/vendor specific setup
### Amcrest & Dahua
Connect to Amcrest & Dahua cameras via RTSP using the following format:
```
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=0 # this is the main stream
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=1 # this is the sub stream, typically supporting low resolutions only
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=2 # higher end cameras support a third stream with a mid resolution (1280x720, 1920x1080)
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=3 # new higher end cameras support a fourth stream with another mid resolution (1280x720, 1920x1080)
```
### Annke C800
This camera is H.265 only. To be able to play clips on some devices (like MacOs or iPhone) the H.265 stream has to be repackaged and the audio stream has to be converted to aac. Unfortunately direct playback in the browser is not working (yet), but the downloaded clip can be played locally.
This camera is H.265 only. To be able to play clips on some devices (like MacOs or iPhone) the H.265 stream has to be adjusted using the `apple_compatibility` config.
```yaml
cameras:
  annkec800: # <------ Name the camera
    ffmpeg:
      apple_compatibility: true # <- Adds compatibility with MacOS and iPhone
      output_args:
        record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -tag:v hvc1 -bsf:v hevc_mp4toannexb -c:a aac
        record: preset-record-generic-audio-aac
      inputs:
        - path: rtsp://user:password@camera-ip:554/H264/ch1/main/av_stream # <----- Update for your camera
        - path: rtsp://USERNAME:PASSWORD@CAMERA-IP/H264/ch1/main/av_stream # <----- Update for your camera
          roles:
            - detect
            - record
@ -95,6 +108,29 @@ ffmpeg:
input_args: preset-rtsp-blue-iris
```
### Hikvision Cameras
Connect to Hikvision cameras via RTSP using the following format:
```
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/101 # this is the main stream
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/102 # this is the sub stream, typically supporting low resolutions only
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/103 # higher end cameras support a third stream with a mid resolution (1280x720, 1920x1080)
```
:::note
[Some users have reported](https://www.reddit.com/r/frigate_nvr/comments/1hg4ze7/hikvision_security_settings) that newer Hikvision cameras require adjustments to the security settings:
```
RTSP Authentication - digest/basic
RTSP Digest Algorithm - MD5
WEB Authentication - digest/basic
WEB Digest Algorithm - MD5
```
:::
### Reolink Cameras
Reolink has older cameras (ex: 410 & 520) as well as newer cameras (ex: 520a & 511wa) that support different subsets of options. In both cases, using the HTTP stream is recommended.
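As a minimal sketch of the recommended HTTP approach (the FLV URL parameters are assumed and may differ per model and firmware):

```yaml
go2rtc:
  streams:
    your_reolink_camera:
      # Restream the camera's HTTP-FLV main stream and transcode audio to opus for WebRTC
      - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
```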


@ -0,0 +1,59 @@
---
id: face_recognition
title: Face Recognition
---
Face recognition allows people to be assigned names and, when their face is recognized, Frigate will assign the person's name as a sub label. This information is included in the UI, in filters, and in notifications.

Frigate has support for FaceNet, which runs locally, to create face embeddings. Embeddings are then saved to Frigate's database.
## Minimum System Requirements
Face recognition works by running a large AI model locally on your system. Systems without a GPU will not run Face Recognition reliably or at all.
## Configuration
Face recognition is disabled by default and requires Semantic Search to be enabled. Face recognition must be enabled in your config file before it can be used. Semantic Search and face recognition are global configuration settings.
```yaml
face_recognition:
  enabled: true
```
## Dataset
The number of images needed for a sufficient training set for face recognition varies depending on several factors:
- Diversity of the dataset: A dataset with diverse images, including variations in lighting, pose, and facial expressions, will require fewer images per person than a less diverse dataset.
- Desired accuracy: The higher the desired accuracy, the more images are typically needed.
However, here are some general guidelines:
- Minimum: For basic face recognition tasks, a minimum of 10-20 images per person is often recommended.
- Recommended: For more robust and accurate systems, 30-50 images per person is a good starting point.
- Ideal: For optimal performance, especially in challenging conditions, 100 or more images per person can be beneficial.
## Creating a Robust Training Set
The accuracy of face recognition is heavily dependent on the quality of data given to it for training. It is recommended to build the face training library in phases.
:::tip
When choosing images to include in the face training set it is recommended to always follow these recommendations:
- If it is difficult to make out details in a person's face, it will not be helpful in training.
- Avoid images with under/over-exposure.
- Avoid blurry / pixelated images.
- Be careful when uploading images of people when they are wearing clothing that covers a lot of their face, as this may confuse the training.
- Do not upload too many images at the same time; it is recommended to train 4-6 images for each person each day so it is easier to know whether the previously added images helped or hurt performance.
:::
### Step 1 - Building a Strong Foundation
When first enabling face recognition it is important to build a foundation of strong images. It is recommended to start by uploading 1-2 photos taken by a smartphone for each person. It is important that the person's face in the photo is straight-on and not turned which will ensure a good starting point.
Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are straight-on. Ignore images from cameras that recognize faces from an angle. Once a person starts to be consistently recognized correctly on images that are straight-on, it is time to move on to the next step.
### Step 2 - Expanding The Dataset
Once straight-on images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone.


@ -15,9 +15,9 @@ Semantic Search must be enabled to use Generative AI.
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 providers available to integrate with Frigate.
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
@ -116,7 +116,7 @@ genai:
model: gpt-4o
```
::: note
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
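For example, in a Docker Compose file (the service layout and URL are placeholders):

```yaml
services:
  frigate:
    environment:
      # Point the OpenAI provider at a compatible third-party endpoint
      - OPENAI_BASE_URL=https://my-provider.example.com/v1
```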


@ -175,6 +175,16 @@ For more information on the various values across different distributions, see h
Depending on your OS and kernel configuration, you may need to change the `/proc/sys/kernel/perf_event_paranoid` kernel tunable. You can test the change by running `sudo sh -c 'echo 2 >/proc/sys/kernel/perf_event_paranoid'` which will persist until a reboot. Make it permanent by running `sudo sh -c 'echo kernel.perf_event_paranoid=2 >> /etc/sysctl.d/local.conf'`
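The same commands, gathered for convenience:

```bash
# Apply immediately (persists until reboot)
sudo sh -c 'echo 2 >/proc/sys/kernel/perf_event_paranoid'

# Make the change permanent across reboots
sudo sh -c 'echo kernel.perf_event_paranoid=2 >> /etc/sysctl.d/local.conf'
```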
#### Stats for SR-IOV devices
When using virtualized GPUs via SR-IOV, additional args are needed for GPU stats to function. This can be enabled with the following config:
```yaml
telemetry:
  stats:
    sriov: True
```
## AMD/ATI GPUs (Radeon HD 2000 and newer GPUs) via libva-mesa-driver
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.


@ -0,0 +1,88 @@
---
id: license_plate_recognition
title: License Plate Recognition (LPR)
---
Frigate can recognize license plates on vehicles and automatically add the detected characters as a `sub_label` to objects that are of type `car`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street with a dedicated LPR camera.
Users running a Frigate+ model (or any custom model that natively detects license plates) should ensure that `license_plate` is added to the [list of objects to track](https://docs.frigate.video/plus/#available-label-types) either globally or for a specific camera. This will improve the accuracy and performance of the LPR model.
Users without a model that detects license plates can still run LPR. A small YOLOv9 license plate detection model that runs on your CPU will be used instead. You should _not_ define `license_plate` in your list of objects to track.
LPR is most effective when the vehicle's license plate is fully visible to the camera. For moving vehicles, Frigate will attempt to read the plate continuously, refining recognition and keeping the most confident result. LPR will not run on stationary vehicles.
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and run on your CPU. At least 4GB of RAM is required.
## Configuration
License plate recognition is disabled by default. Enable it in your config file:
```yaml
lpr:
  enabled: True
```
## Advanced Configuration
Fine-tune the LPR feature using these optional parameters:
### Detection
- **`detection_threshold`**: License plate object detection confidence score required before recognition runs.
- Default: `0.7`
- Note: If you are using a Frigate+ model and you set the `threshold` in your objects config for `license_plate` higher than this value, recognition will never run. It's best to ensure these values match, or this `detection_threshold` is lower than your object config `threshold`.
- **`min_area`**: Defines the minimum size (in pixels) a license plate must be before recognition runs.
- Default: `1000` pixels.
- Depending on the resolution of your cameras, you can increase this value to ignore small or distant plates.
### Recognition
- **`recognition_threshold`**: Recognition confidence score required to add the plate to the object as a sub label.
- Default: `0.9`.
- **`min_plate_length`**: Specifies the minimum number of characters a detected license plate must have to be added as a sub-label to an object.
- Use this to filter out short, incomplete, or incorrect detections.
- **`format`**: A regular expression defining the expected format of detected plates. Plates that do not match this format will be discarded.
- `"^[A-Z]{1,3} [A-Z]{1,2} [0-9]{1,4}$"` matches plates like "B AB 1234" or "M X 7"
- `"^[A-Z]{2}[0-9]{2} [A-Z]{3}$"` matches plates like "AB12 XYZ" or "XY68 ABC"
### Matching
- **`known_plates`**: List of strings or regular expressions that assign a custom `sub_label` to `car` objects when a recognized plate matches a known value.
- These labels appear in the UI, filters, and notifications.
- **`match_distance`**: Allows for minor variations (missing/incorrect characters) when matching a detected plate to a known plate.
- For example, setting `match_distance: 1` allows a plate `ABCDE` to match `ABCBE` or `ABCD`.
- This parameter will not operate on known plates that are defined as regular expressions. You should define the full string of your plate in `known_plates` in order to use `match_distance`.
### Examples
```yaml
lpr:
  enabled: True
  min_area: 1500 # Ignore plates smaller than 1500 pixels
  min_plate_length: 4 # Only recognize plates with 4 or more characters
  known_plates:
    Wife's Car:
      - "ABC-1234"
      - "ABC-I234" # Accounts for potential confusion between the number one (1) and capital letter I
    Johnny:
      - "J*N-*234" # Matches JHN-1234 and JMN-I234, but also note that "*" matches any number of characters
    Sally:
      - "[S5]LL-1234" # Matches both SLL-1234 and 5LL-1234
```
```yaml
lpr:
  enabled: True
  min_area: 4000 # Run recognition on larger plates only
  recognition_threshold: 0.85
  format: "^[A-Z]{3}-[0-9]{4}$" # Only recognize plates that are three letters, followed by a dash, followed by 4 numbers
  match_distance: 1 # Allow one character variation in plate matching
  known_plates:
    Delivery Van:
      - "RJK-5678"
      - "UPS-1234"
    Employee Parking:
      - "EMP-[0-9]{3}[A-Z]" # Matches plates like EMP-123A, EMP-456Z
```


@ -3,9 +3,9 @@ id: live
title: Live View
---
Frigate intelligently displays your camera streams on the Live view dashboard. Your camera images update once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any motion is detected, cameras seamlessly switch to a live stream.
Frigate intelligently displays your camera streams on the Live view dashboard. By default, Frigate employs "smart streaming" where camera images update once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any motion or active objects are detected, cameras seamlessly switch to a live stream.
## Live View technologies
### Live View technologies
Frigate intelligently uses three different streaming technologies to display your camera streams on the dashboard and the single camera view, switching between available modes based on network bandwidth, player errors, or required features like two-way talk. The highest quality and fluency of the Live view requires the bundled `go2rtc` to be configured as shown in the [step by step guide](/guides/configuring_go2rtc).
@ -51,19 +51,32 @@ go2rtc:
- ffmpeg:rtsp://192.168.1.5:554/live0#video=copy
```
### Setting Stream For Live UI
### Setting Streams For Live UI
There may be some cameras that you would prefer to use the sub stream for live view, but the main stream for recording. This can be done via `live -> stream_name`.
You can configure Frigate to allow manual selection of the stream you want to view in the Live UI. For example, you may want to view your camera's substream on mobile devices, but the full resolution stream on desktop devices. Setting the `live -> streams` list will populate a dropdown in the UI's Live view that allows you to choose between the streams. This stream setting is _per device_ and is saved in your browser's local storage.
Additionally, when creating and editing camera groups in the UI, you can choose the stream you want to use for your camera group's Live dashboard.
:::note
Frigate's default dashboard ("All Cameras") will always use the first entry you've defined in `streams:` when playing live streams from your cameras.
:::
Configure the `streams` option with a "friendly name" for your stream followed by the go2rtc stream name.
Using Frigate's internal version of go2rtc is required to use this feature. You cannot specify paths in the `streams` configuration, only go2rtc stream names.
```yaml
go2rtc:
  streams:
    test_cam:
      - rtsp://192.168.1.5:554/live0 # <- stream which supports video & aac audio.
      - rtsp://192.168.1.5:554/live_main # <- stream which supports video & aac audio.
      - "ffmpeg:test_cam#audio=opus" # <- copy of the stream which transcodes audio to opus for webrtc
    test_cam_sub:
      - rtsp://192.168.1.5:554/substream # <- stream which supports video & aac audio.
      - "ffmpeg:test_cam_sub#audio=opus" # <- copy of the stream which transcodes audio to opus for webrtc
      - rtsp://192.168.1.5:554/live_sub # <- stream which supports video & aac audio.
    test_cam_another_sub:
      - rtsp://192.168.1.5:554/live_alt # <- stream which supports video & aac audio.

cameras:
  test_cam:
@ -80,7 +93,10 @@ cameras:
          roles:
            - detect
    live:
      stream_name: test_cam_sub
      streams: # <--- Multiple streams for Frigate 0.16 and later
        Main Stream: test_cam # <--- Specify a "friendly name" followed by the go2rtc stream name
        Sub Stream: test_cam_sub
        Special Stream: test_cam_another_sub
```
### WebRTC extra configuration:
@ -101,6 +117,7 @@ WebRTC works by creating a TCP or UDP connection on port `8555`. However, it req
```
- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.
- Note that WebRTC does not support H.265.
:::tip
@ -148,3 +165,50 @@ For devices that support two way talk, Frigate can be configured to use the feat
- For the Home Assistant Frigate card, [follow the docs](https://github.com/dermotduffy/frigate-hass-card?tab=readme-ov-file#using-2-way-audio) for the correct source.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
### Streaming options on camera group dashboards
Frigate provides a dialog in the Camera Group Edit pane with several options for streaming on a camera group's dashboard. These settings are _per device_ and are saved in your device's local storage.
- Stream selection using the `live -> streams` configuration option (see _Setting Streams For Live UI_ above)
- Streaming type:
- _No streaming_: Camera images will only update once per minute and no live streaming will occur.
- _Smart Streaming_ (default, recommended setting): Smart streaming will update your camera image once per minute when no detectable activity is occurring to conserve bandwidth and resources, since a static picture is the same as a streaming image with no motion or objects. When motion or objects are detected, the image seamlessly switches to a live stream.
- _Continuous Streaming_: Camera image will always be a live stream when visible on the dashboard, even if no activity is being detected. Continuous streaming may cause high bandwidth usage and performance issues. **Use with caution.**
- _Compatibility mode_: Enable this option only if your camera's live stream is displaying color artifacts and has a diagonal line on the right side of the image. Before enabling this, try setting your camera's `detect` width and height to a standard aspect ratio (for example: 640x352 becomes 640x360, and 800x443 becomes 800x450, 2688x1520 becomes 2688x1512, etc). Depending on your browser and device, more than a few cameras in compatibility mode may not be supported, so only use this option if changing your config fails to resolve the color artifacts and diagonal line.
:::note
The default dashboard ("All Cameras") will always use Smart Streaming and the first entry set in your `streams` configuration, if defined. Use a camera group if you want to change any of these settings from the defaults.
:::
## Live view FAQ
1. Why don't I have audio in my Live view?
You must use go2rtc to hear audio in your live streams. If you have go2rtc already configured, you need to ensure your camera is sending PCMA/PCMU or AAC audio. If you can't change your camera's audio codec, you need to [transcode the audio](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg) using go2rtc.
Note that the low bandwidth mode player is a video-only stream. You should not expect to hear audio when in low bandwidth mode, even if you've set up go2rtc.
2. Frigate shows that my live stream is in "low bandwidth mode". What does this mean?
Frigate intelligently selects the live streaming technology based on a number of factors (user-selected modes like two-way talk, camera settings, browser capabilities, available bandwidth) and prioritizes showing an actual up-to-date live view of your camera's stream as quickly as possible.
When you have go2rtc configured, Live view initially attempts to load and play back your stream with a clearer, fluent stream technology (MSE). An initial timeout, a low bandwidth condition that would cause buffering of the stream, or decoding errors in the stream will cause Frigate to switch to the stream defined by the `detect` role, using the jsmpeg format. This is what the UI labels as "low bandwidth mode". On Live dashboards, the mode will automatically reset when smart streaming is configured and activity stops. You can also try using the _Reset_ button to force a reload of your stream.
If you are still experiencing Frigate falling back to low bandwidth mode, you may need to adjust your camera's settings per the recommendations above or ensure you have enough bandwidth available.
3. It doesn't seem like my cameras are streaming on the Live dashboard. Why?
On the default Live dashboard ("All Cameras"), your camera images will update once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any activity is detected, cameras seamlessly switch to a full-resolution live stream. If you want to customize this behavior, use a camera group.
4. I see a strange diagonal line on my live view, but my recordings look fine. How can I fix it?
This is caused by incorrect dimensions set in your detect width or height (or incorrectly auto-detected), causing the jsmpeg player's rendering engine to display a slightly distorted image. You should enlarge the width and height of your `detect` resolution up to a standard aspect ratio (example: 640x352 becomes 640x360, and 800x443 becomes 800x450, 2688x1520 becomes 2688x1512, etc). If changing the resolution to match a standard (4:3, 16:9, or 32:9, etc) aspect ratio does not solve the issue, you can enable "compatibility mode" in your camera group dashboard's stream settings. Depending on your browser and device, more than a few cameras in compatibility mode may not be supported, so only use this option if changing your `detect` width and height fails to resolve the color artifacts and diagonal line.
5. How does "smart streaming" work?
Because a static image of a scene looks exactly the same as a live stream with no motion or activity, smart streaming updates your camera images once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any activity (motion or object/audio detection) occurs, cameras seamlessly switch to a live stream.
This static image is pulled from the stream defined in your config with the `detect` role. When activity is detected, images from the `detect` stream immediately begin updating at ~5 frames per second so you can see the activity until the live player is loaded and begins playing. This usually only takes a second or two. If the live player times out, buffers, or has streaming errors, the jsmpeg player is loaded and plays a video-only stream from the `detect` role. When activity ends, the players are destroyed and a static image is displayed until activity is detected again, and the process repeats.
This is Frigate's default and recommended setting because it results in a significant bandwidth savings, especially for high resolution cameras.
6. I have unmuted some cameras on my dashboard, but I do not hear sound. Why?
If your camera is streaming (as indicated by a red dot in the upper right, or if it has been set to continuous streaming mode), your browser may be blocking audio until you interact with the page. This is an intentional browser limitation. See [this article](https://developer.mozilla.org/en-US/docs/Web/Media/Autoplay_guide#autoplay_availability). Many browsers have a whitelist feature to change this behavior.


@ -0,0 +1,99 @@
---
id: metrics
title: Metrics
---
# Metrics
Frigate exposes Prometheus metrics at the `/api/metrics` endpoint that can be used to monitor the performance and health of your Frigate instance.
## Available Metrics
### System Metrics
- `frigate_cpu_usage_percent{pid="", name="", process="", type="", cmdline=""}` - Process CPU usage percentage
- `frigate_mem_usage_percent{pid="", name="", process="", type="", cmdline=""}` - Process memory usage percentage
- `frigate_gpu_usage_percent{gpu_name=""}` - GPU utilization percentage
- `frigate_gpu_mem_usage_percent{gpu_name=""}` - GPU memory usage percentage
### Camera Metrics
- `frigate_camera_fps{camera_name=""}` - Frames per second being consumed from your camera
- `frigate_detection_fps{camera_name=""}` - Number of times detection is run per second
- `frigate_process_fps{camera_name=""}` - Frames per second being processed
- `frigate_skipped_fps{camera_name=""}` - Frames per second skipped for processing
- `frigate_detection_enabled{camera_name=""}` - Detection enabled status for camera
- `frigate_audio_dBFS{camera_name=""}` - Audio dBFS for camera
- `frigate_audio_rms{camera_name=""}` - Audio RMS for camera
### Detector Metrics
- `frigate_detector_inference_speed_seconds{name=""}` - Time spent running object detection in seconds
- `frigate_detection_start{name=""}` - Detector start time (unix timestamp)
### Storage Metrics
- `frigate_storage_free_bytes{storage=""}` - Storage free bytes
- `frigate_storage_total_bytes{storage=""}` - Storage total bytes
- `frigate_storage_used_bytes{storage=""}` - Storage used bytes
- `frigate_storage_mount_type{mount_type="", storage=""}` - Storage mount type info
### Service Metrics
- `frigate_service_uptime_seconds` - Uptime in seconds
- `frigate_service_last_updated_timestamp` - Stats recorded time (unix timestamp)
- `frigate_device_temperature{device=""}` - Device Temperature
### Event Metrics
- `frigate_camera_events{camera="", label=""}` - Count of camera events since exporter started
## Configuring Prometheus
To scrape metrics from Frigate, add the following to your Prometheus configuration:
```yaml
scrape_configs:
  - job_name: 'frigate'
    metrics_path: '/api/metrics'
    static_configs:
      - targets: ['frigate:5000']
    scrape_interval: 15s
```
## Example Queries
Here are some example PromQL queries that might be useful:
```promql
# Average CPU usage across all processes
avg(frigate_cpu_usage_percent)
# Total GPU memory usage
sum(frigate_gpu_mem_usage_percent)
# Detection FPS by camera
rate(frigate_detection_fps{camera_name="front_door"}[5m])
# Storage usage percentage
(frigate_storage_used_bytes / frigate_storage_total_bytes) * 100
# Event count by camera in last hour
increase(frigate_camera_events[1h])
```
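These metrics can also drive alerting. A minimal Prometheus alerting rule sketch (the rule name and threshold are illustrative):

```yaml
groups:
  - name: frigate
    rules:
      - alert: FrigateCameraStalled
        # Fire when a camera has produced no frames for 5 minutes
        expr: frigate_camera_fps == 0
        for: 5m
        labels:
          severity: warning
```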
## Grafana Dashboard
You can use these metrics to create Grafana dashboards to monitor your Frigate instance. Here's an example of metrics you might want to track:
- CPU, Memory and GPU usage over time
- Camera FPS and detection rates
- Storage usage and trends
- Event counts by camera
- System temperatures
A sample Grafana dashboard JSON will be provided in a future update.
## Metric Types
The metrics exposed by Frigate use the following Prometheus metric types:
- **Counter**: Cumulative values that only increase (e.g., `frigate_camera_events`)
- **Gauge**: Values that can go up and down (e.g., `frigate_cpu_usage_percent`)
- **Info**: Key-value pairs for metadata (e.g., `frigate_storage_mount_type`)
For more information about Prometheus metric types, see the [Prometheus documentation](https://prometheus.io/docs/concepts/metric_types/).


@ -33,6 +33,14 @@ Frigate supports multiple different detectors that work on different types of ha
:::
:::note
Multiple detectors cannot be mixed for object detection (ex: OpenVINO and Coral EdgeTPU cannot be used for object detection at the same time).
This does not affect using hardware for accelerating other tasks such as [semantic search](./semantic_search.md).
:::
# Officially Supported Detectors
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, `rocm`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
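For example, two PCIe Corals could be configured as separate detectors like this (the device identifiers are illustrative):

```yaml
detectors:
  coral1:
    type: edgetpu
    device: pci:0
  coral2:
    type: edgetpu
    device: pci:1
```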
@ -116,6 +124,30 @@ detectors:
device: pci
```
## Hailo-8l
This detector is available for use with Hailo-8 AI Acceleration Module.
See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the hailo8.
### Configuration
```yaml
detectors:
  hailo8l:
    type: hailo8l
    device: PCIe

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  model_type: ssd
  path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
```
## OpenVINO Detector
The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
@ -169,15 +201,7 @@ This detector also supports YOLOX. Frigate does not come with any YOLOX models p
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
@ -199,13 +223,43 @@ model:
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
#### YOLOv9
[YOLOv9](https://github.com/MultimediaTechLab/YOLO) models are supported, but not included by default.
:::tip
The YOLOv9 detector has been designed to support YOLOv9 models, but may support other YOLO model architectures as well.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
detectors:
  ov:
    type: openvino
    device: GPU

model:
  model_type: yolov9
  width: 640 # <--- should match the imgsize set during model export
  height: 640 # <--- should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolov9-t.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## NVidia TensorRT Detector
Nvidia GPUs may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt` tag suffix, e.g. `ghcr.io/blakeblackshear/frigate:stable-tensorrt`. This detector is designed to work with Yolo models for object detection.
### Minimum Hardware Support
The TensorRT detector uses the 12.x series of CUDA libraries which have minor version compatibility. The minimum driver version on the host system must be `>=530`. Also the GPU must support a Compute Capability of `5.0` or greater. This generally correlates to a Maxwell-era GPU or newer, check the NVIDIA GPU Compute Capability table linked below.
The TensorRT detector uses the 12.x series of CUDA libraries which have minor version compatibility. The minimum driver version on the host system must be `>=545`. Also the GPU must support a Compute Capability of `5.0` or greater. This generally correlates to a Maxwell-era GPU or newer, check the NVIDIA GPU Compute Capability table linked below.
To use the TensorRT detector, make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass through the GPU to the container and the host system has a compatible driver installed for your GPU.
@ -233,6 +287,8 @@ If your GPU does not support FP16 operations, you can pass the environment varia
Specific models can be selected by passing an environment variable to the `docker run` command or in your `docker-compose.yml` file. Use the form `-e YOLO_MODELS=yolov4-416,yolov4-tiny-416` to select one or more model names. The models available are shown below.
<details>
<summary>Available Models</summary>
```
yolov3-288
yolov3-416
@ -261,6 +317,7 @@ yolov7-320
yolov7x-640
yolov7x-320
```
</details>
An example `docker-compose.yml` fragment that converts the `yolov4-608` and `yolov7x-640` models for a Pascal card would look something like this:
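A sketch of such a fragment, assuming the `YOLO_MODELS` and `USE_FP16` variables described above:

```yaml
frigate:
  environment:
    # Convert both models on container startup
    - YOLO_MODELS=yolov4-608,yolov7x-640
    # Pascal cards lack fast FP16 support, so disable it
    - USE_FP16=false
```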
@ -388,15 +445,7 @@ There is no default model provided, the following formats are supported:
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
@ -418,7 +467,7 @@ Note that the labelmap uses a subset of the complete COCO label set that has onl
## ONNX
ONNX is an open format for building machine learning models, Frigate supports running ONNX models on CPU, OpenVINO, and TensorRT. On startup Frigate will automatically try to use a GPU if one is available.
ONNX is an open format for building machine learning models, Frigate supports running ONNX models on CPU, OpenVINO, ROCm, and TensorRT. On startup Frigate will automatically try to use a GPU if one is available.
:::info
@ -458,15 +507,7 @@ There is no default model provided, the following formats are supported:
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
@ -485,6 +526,33 @@ model:
labelmap_path: /labelmap/coco-80.txt
```
#### YOLOv9
[YOLOv9](https://github.com/MultimediaTechLab/YOLO) models are supported, but not included by default.
:::tip
The YOLOv9 detector has been designed to support YOLOv9 models, but may support other YOLO model architectures as well.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolov9
  width: 640 # <--- should match the imgsize set during model export
  height: 640 # <--- should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolov9-t.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## CPU Detector (not recommended)
@ -550,7 +618,7 @@ Hardware accelerated object detection is supported on the following SoCs:
- RK3576
- RK3588
This implementation uses the [Rockchip's RKNN-Toolkit2](https://github.com/airockchip/rknn-toolkit2/), version v2.0.0.beta0. Currently, only [Yolo-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) is supported as object detection model.
This implementation uses the [Rockchip's RKNN-Toolkit2](https://github.com/airockchip/rknn-toolkit2/), version v2.3.0. Currently, only [Yolo-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) is supported as object detection model.
### Prerequisites
@ -625,25 +693,56 @@ $ cat /sys/kernel/debug/rknpu/load
- All models are automatically downloaded and stored in the folder `config/model_cache/rknn_cache`. After upgrading Frigate, you should remove older models to free up space.
- You can also provide your own `.rknn` model. You should not save your own models in the `rknn_cache` folder; store them directly in the `model_cache` folder or another subfolder. To convert a model to `.rknn` format, see the `rknn-toolkit2` (requires an x86 machine). Note that there is only post-processing for the supported models.
## Hailo-8l

This detector is available for use with Hailo-8 AI Acceleration Module.

See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the hailo8.

### Configuration

```yaml
detectors:
  hailo8l:
    type: hailo8l
    device: PCIe

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  model_type: ssd
  path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
```

### Converting your own onnx model to rknn format

To convert an onnx model to the rknn format using the [rknn-toolkit2](https://github.com/airockchip/rknn-toolkit2/) you have to:

- Place one or more models in onnx format in the directory `config/model_cache/rknn_cache/onnx` on your docker host (this might require `sudo` privileges).
- Save the configuration file under `config/conv2rknn.yaml` (see below for details).
- Run `docker exec <frigate_container_id> python3 /opt/conv2rknn.py`. If the conversion was successful, the rknn models will be placed in `config/model_cache/rknn_cache`.

This is an example configuration file that you need to adjust to your specific onnx model:

```yaml
soc: ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]
quantization: false

output_name: "{input_basename}"

config:
  mean_values: [[0, 0, 0]]
  std_values: [[255, 255, 255]]
  quant_img_rgb2bgr: true
```
Explanation of the parameters:
- `soc`: A list of all SoCs you want to build the rknn model for. If you don't specify this parameter, the script tries to find out your SoC and builds the rknn model for this one.
- `quantization`: true: 8 bit integer (i8) quantization, false: 16 bit float (fp16). Default: false.
- `output_name`: The output name of the model. The following variables are available:
- `quant`: "i8" or "fp16" depending on the config
- `input_basename`: the basename of the input model (e.g. "my_model" if the input model is called "my_model.onnx")
- `soc`: the SoC this model was built for (e.g. "rk3588")
- `tk_version`: Version of `rknn-toolkit2` (e.g. "2.3.0")
- **example**: Specifying `output_name = "frigate-{quant}-{input_basename}-{soc}-v{tk_version}"` could result in a model called `frigate-i8-my_model-rk3588-v2.3.0.rknn`.
- `config`: Configuration passed to `rknn-toolkit2` for model conversion. For an explanation of all available parameters have a look at section "2.2. Model configuration" of [this manual](https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/03_Rockchip_RKNPU_API_Reference_RKNN_Toolkit2_V2.3.0_EN.pdf).
# Models
Some model types are not included in Frigate by default.
## Downloading Models
Here are some tips for obtaining different model types:
### Downloading YOLO-NAS Model
You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.


@ -34,7 +34,7 @@ False positives can also be reduced by filtering a detection based on its shape.
### Object Area
`min_area` and `max_area` filter on the area of an objects bounding box in pixels and can be used to reduce false positives that are outside the range of expected sizes. For example when a leaf is detected as a dog or when a large tree is detected as a person, these can be reduced by adding a `min_area` / `max_area` filter.
`min_area` and `max_area` filter on the area of an object's bounding box and can be used to reduce false positives that are outside the range of expected sizes. For example, when a leaf is detected as a dog or when a large tree is detected as a person, these can be reduced by adding a `min_area` / `max_area` filter. These values can either be in pixels or as a percentage of the frame (for example, 0.12 represents 12% of the frame).
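For example, using the percentage form described above:

```yaml
objects:
  filters:
    person:
      min_area: 0.01 # Ignore detections smaller than 1% of the frame
      max_area: 0.6 # Ignore detections larger than 60% of the frame
```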
### Object Proportions


@ -46,6 +46,11 @@ mqtt:
tls_insecure: false
# Optional: interval in seconds for publishing stats (default: shown below)
stats_interval: 60
# Optional: QoS level for subscriptions and publishing (default: shown below)
# 0 = at most once
# 1 = at least once
# 2 = exactly once
qos: 0
# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
@ -244,6 +249,8 @@ ffmpeg:
# If set too high, then if a ffmpeg crash or camera stream timeout occurs, you could potentially lose up to a maximum of retry_interval second(s) of footage
# NOTE: this can be a useful setting for Wireless / Battery cameras to reduce how much footage is potentially lost during a connection timeout.
retry_interval: 10
# Optional: Set tag on HEVC (H.265) recording stream to improve compatibility with Apple players. (default: shown below)
apple_compatibility: false
# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
@ -310,9 +317,11 @@ objects:
# Optional: filters to reduce false positives for specific object types
filters:
person:
# Optional: minimum width*height of the bounding box for the detected object (default: 0)
# Optional: minimum size of the bounding box for the detected object (default: 0).
# Can be specified as an integer for width*height in pixels or as a decimal representing the percentage of the frame (0.000001 to 0.99).
min_area: 5000
# Optional: maximum width*height of the bounding box for the detected object (default: 24000000)
# Optional: maximum size of the bounding box for the detected object (default: 24000000).
# Can be specified as an integer for width*height in pixels or as a decimal representing the percentage of the frame (0.000001 to 0.99).
max_area: 100000
# Optional: minimum width/height of the bounding box for the detected object (default: 0)
min_ratio: 0.5
@ -331,6 +340,8 @@ objects:
review:
# Optional: alerts configuration
alerts:
# Optional: enables alerts for the camera (default: shown below)
enabled: True
# Optional: labels that qualify as an alert (default: shown below)
labels:
- car
@ -343,6 +354,8 @@ review:
- driveway
# Optional: detections configuration
detections:
# Optional: enables detections for the camera (default: shown below)
enabled: True
# Optional: labels that qualify as a detection (default: all labels that are tracked / listened to)
labels:
- car
@ -400,6 +413,7 @@ motion:
mqtt_off_delay: 30
# Optional: Notification Configuration
# NOTE: Can be overridden at the camera level (except email)
notifications:
# Optional: Enable notification service (default: shown below)
enabled: False
@ -524,6 +538,33 @@ semantic_search:
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"
# Optional: Configuration for face recognition capability
face_recognition:
  # Optional: Enable face recognition (default: shown below)
  enabled: False
  # Optional: Set the model size used for embeddings. (default: shown below)
  # NOTE: small model runs on CPU and large model runs on GPU
  model_size: "small"

# Optional: Configuration for license plate recognition capability
lpr:
  # Optional: Enable license plate recognition (default: shown below)
  enabled: False
  # Optional: License plate object confidence score required to begin running recognition (default: shown below)
  detection_threshold: 0.7
  # Optional: Minimum area of license plate to begin running recognition (default: shown below)
  min_area: 1000
  # Optional: Recognition confidence score required to add the plate to the object as a sub label (default: shown below)
  recognition_threshold: 0.9
  # Optional: Minimum number of characters a license plate must have to be added to the object as a sub label (default: shown below)
  min_plate_length: 4
  # Optional: Regular expression for the expected format of a license plate (default: shown below)
  format: None
  # Optional: Allow this number of missing/incorrect characters to still cause a detected plate to match a known plate
  match_distance: 1
  # Optional: Known plates to track (strings or regular expressions) (default: shown below)
  known_plates: {}
# Optional: Configuration for AI generated tracked object descriptions
# NOTE: Semantic Search must be enabled for this to do anything.
# WARNING: Depending on the provider, this will send thumbnails over the internet
@ -549,16 +590,18 @@ genai:
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.2)
# NOTE: The default go2rtc API port (1984) must be used,
# changing this port for the integrated go2rtc instance is not supported.
# changing this port for the integrated go2rtc instance is not supported.
go2rtc:
# Optional: Live stream configuration for WebUI.
# NOTE: Can be overridden at the camera level
live:
# Optional: Set the name of the stream configured in go2rtc
# Optional: Set the streams configured in go2rtc
# that should be used for live view in frigate WebUI. (default: name of camera)
# NOTE: In most cases this should be set at the camera level only.
stream_name: camera_name
streams:
main_stream: main_stream_name
sub_stream: sub_stream_name
# Optional: Set the height of the jsmpeg stream. (default: 720)
# This must be less than or equal to the height of the detect stream. Lower resolutions
# reduce bandwidth required for viewing the jsmpeg stream. Width is computed to match known aspect ratio.
@ -643,7 +686,10 @@ cameras:
front_steps:
# Required: List of x,y coordinates to define the polygon of the zone.
# NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
coordinates: 0.284,0.997,0.389,0.869,0.410,0.745
coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428
# Optional: The real-world distances of a 4-sided zone used for zones with speed estimation enabled (default: none)
# List distances in order of the zone points coordinates and use the unit system defined in the ui config
distances: 10,15,12,11
# Optional: Number of consecutive frames required for object to be considered present in the zone (default: shown below).
inertia: 3
# Optional: Number of seconds that an object must loiter to be considered in the zone (default: shown below)
@ -794,6 +840,9 @@ ui:
# https://www.gnu.org/software/libc/manual/html_node/Formatting-Calendar-Time.html
# possible values are shown above (default: not set)
strftime_fmt: "%Y/%m/%d %H:%M"
# Optional: Set the unit system to either "imperial" or "metric" (default: metric)
# Used in the UI and in MQTT topics
unit_system: metric
# Optional: Telemetry configuration
telemetry:
@ -807,11 +856,13 @@ telemetry:
- lo
# Optional: Configure system stats
stats:
# Enable AMD GPU stats (default: shown below)
# Optional: Enable AMD GPU stats (default: shown below)
amd_gpu_stats: True
# Enable Intel GPU stats (default: shown below)
# Optional: Enable Intel GPU stats (default: shown below)
intel_gpu_stats: True
# Enable network bandwidth stats monitoring for camera ffmpeg processes, go2rtc, and object detectors. (default: shown below)
# Optional: Treat GPU as SR-IOV to fix GPU stats (default: shown below)
sriov: False
# Optional: Enable network bandwidth stats monitoring for camera ffmpeg processes, go2rtc, and object detectors. (default: shown below)
# NOTE: The container must either be privileged or have cap_net_admin, cap_net_raw capabilities enabled.
network_bandwidth: False
# Optional: Enable the latest version outbound check (default: shown below)

View File

@ -1,6 +1,6 @@
---
id: semantic_search
title: Using Semantic Search
title: Semantic Search
---
Semantic Search in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This feature works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
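For orientation, enabling the feature is a small config change. A minimal sketch (the `reindex` key is optional and shown with its assumed default; consult the reference config for the full set of options):

```yaml
semantic_search:
  # generate embeddings for tracked objects
  enabled: True
  # re-embed all existing tracked objects on startup (one-time and expensive)
  reindex: False
```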

View File

@ -122,16 +122,59 @@ cameras:
- car
```
### Loitering Time
### Speed Estimation
Zones support a `loitering_time` configuration which can be used to only consider an object as part of a zone if they loiter in the zone for the specified number of seconds. This can be used, for example, to create alerts for cars that stop on the street but not cars that just drive past your camera.
Frigate can be configured to estimate the speed of objects moving through a zone. This works by combining data from Frigate's object tracker and "real world" distance measurements of the edges of the zone. The recommended use case for this feature is to track the speed of vehicles on a road as they move through the zone.
Your zone must be defined with exactly 4 points and should be aligned to the ground where objects are moving.
![Ground plane 4-point zone](/img/ground-plane.jpg)
Speed estimation requires a minimum number of frames for your object to be tracked before a valid estimate can be calculated, so create your zone away from places where objects enter and exit for the best results. _Your zone should not take up the full frame._ An object's speed is tracked while it is in the zone and then saved to Frigate's database.
Accurate real-world distance measurements are required to estimate speeds. These distances can be specified in your zone config through the `distances` field.
```yaml
cameras:
name_of_your_camera:
zones:
front_yard:
loitering_time: 5 # unit is in seconds
objects:
- person
street:
coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428
distances: 10,12,11,13.5
```
Each number in the `distances` field represents the real-world distance between the points in the `coordinates` list. So in the example above, the distance between the first two points ([0.033,0.306] and [0.324,0.138]) is 10. The distance between the second and third set of points ([0.324,0.138] and [0.439,0.185]) is 12, and so on. The fastest and most accurate way to configure this is through the Zone Editor in the Frigate UI.
The `distances` values are measured in meters or feet, depending on how `unit_system` is configured in your `ui` config:
```yaml
ui:
# can be "metric" or "imperial", default is metric
unit_system: metric
```
The average speed of your object as it moved through your zone is saved in Frigate's database and can be seen in the UI in the Tracked Object Details pane in Explore. Current estimated speed can also be seen on the debug view as the third value in the object label (see the caveats below). Current estimated speed, average estimated speed, and velocity angle (the angle of the direction the object is moving relative to the frame) of tracked objects are also sent through the `events` MQTT topic. See the [MQTT docs](../integrations/mqtt.md#frigateevents). These speed values are output as a number in miles per hour (mph) or kilometers per hour (kph), depending on how `unit_system` is configured in your `ui` config.
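To illustrate consuming these values, here is a minimal subscriber sketch. It is not part of Frigate; the broker address is a placeholder and it assumes the paho-mqtt 2.x client library:

```python
import json

import paho.mqtt.client as mqtt


def on_message(client, userdata, msg):
    # each frigate/events payload carries "before"/"after" snapshots of the object
    after = json.loads(msg.payload).get("after") or {}
    speed = after.get("current_estimated_speed")
    if speed:
        print(
            f"{after.get('camera')}/{after.get('label')}: {speed:.1f} "
            f"(velocity angle {after.get('velocity_angle')})"
        )


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("frigate/events")
client.loop_forever()
```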
#### Best practices and caveats
- Speed estimation works best with a straight road or path when your object travels in a straight line across that path. Avoid creating your zone near intersections or anywhere that objects would make a turn. If the bounding box changes shape (either because the object made a turn or became partially obscured, for example), speed estimation will not be accurate.
- Create a zone where the bottom center of your object's bounding box travels directly through it and does not become obscured at any time. See the photo example above.
- Depending on the size and location of your zone, you may want to decrease the zone's `inertia` value from the default of 3.
- The more accurate your real-world dimensions can be measured, the more accurate speed estimation will be. However, due to the way Frigate's tracking algorithm works, you may need to tweak the real-world distance values so that estimated speeds better match real-world speeds.
- Once an object leaves the zone, speed accuracy will likely decrease due to perspective distortion and misalignment with the calibrated area. Therefore, speed values will show as a zero through MQTT and will not be visible on the debug view when an object is outside of a speed tracking zone.
- The speeds are only an _estimation_ and are highly dependent on camera position, zone points, and real-world measurements. This feature should not be used for law enforcement.
### Speed Threshold
Zones can be configured with a minimum speed requirement, meaning an object must be moving at or above this speed to be considered inside the zone. Zone `distances` must be defined as described above.
```yaml
cameras:
name_of_your_camera:
zones:
sidewalk:
coordinates: ...
distances: ...
inertia: 1
speed_threshold: 20 # unit is in kph or mph, depending on how unit_system is set (see above)
```

View File

@ -13,20 +13,19 @@ Many users have reported various issues with Reolink cameras, so I do not recomm
Here are some of the cameras I recommend:
- <a href="https://amzn.to/3uFLtxB" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) T5442TM-AS-LED</a> (affiliate link)
- <a href="https://amzn.to/3isJ3gU" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T5442TM-AS</a> (affiliate link)
- <a href="https://amzn.to/2ZWNWIA" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-28MM</a> (affiliate link)
- <a href="https://amzn.to/4fwoNWA" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T549M-ALED-S3</a> (affiliate link)
- <a href="https://amzn.to/3YXpcMw" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T54IR-AS</a> (affiliate link)
- <a href="https://amzn.to/3AvBHoY" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-AI-V3</a> (affiliate link)
I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
## Server
My current favorite is the Beelink EQ12 because of the efficient N100 CPU and dual NICs that allow you to setup a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU and capable of running Debian should work fine. As a bonus, you may want to look for devices with a M.2 or PCIe express slot that is compatible with the Google Coral. I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
My current favorite is the Beelink EQ13 because of the efficient N100 CPU and dual NICs that allow you to set up a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU and capable of running Debian should work fine. As a bonus, you may want to look for devices with an M.2 or PCIe slot that is compatible with the Google Coral. I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
| Name | Coral Inference Speed | Coral Compatibility | Notes |
| ------------------------------------------------------------------------------------------------------------- | --------------------- | ------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| Beelink EQ12 (<a href="https://amzn.to/3OlTMJY" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
| Intel NUC (<a href="https://amzn.to/3psFlHi" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Overkill for most, but great performance. Can handle many cameras at 5fps depending on typical amounts of motion. Requires extra parts. |
| Name | Coral Inference Speed | Coral Compatibility | Notes |
| ------------------------------------------------------------------------------------------------------------- | --------------------- | ------------------- | ----------------------------------------------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4iQaBKu" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
## Detectors
@ -52,24 +51,25 @@ The OpenVINO detector type is able to run on:
More information is available [in the detector docs](/configuration/object_detectors#openvino-detector)
Inference speeds vary greatly depending on the CPU, GPU, or VPU used, some known examples are below:
Inference speeds vary greatly depending on the CPU or GPU used; some known examples of GPU inference times are below:
| Name | Inference Speed | Notes |
| -------------------- | --------------- | --------------------------------------------------------------------- |
| Intel NCS2 VPU | 60 - 65 ms | May vary based on host device |
| Intel Celeron J4105 | ~ 25 ms | Inference speeds on CPU were 150 - 200 ms |
| Intel Celeron N3060 | 130 - 150 ms | Inference speeds on CPU were ~ 550 ms |
| Intel Celeron N3205U | ~ 120 ms | Inference speeds on CPU were ~ 380 ms |
| Intel Celeron N4020 | 50 - 200 ms | Inference speeds on CPU were ~ 800 ms, greatly depends on other loads |
| Intel i3 6100T | 15 - 35 ms | Inference speeds on CPU were 60 - 120 ms |
| Intel i3 8100 | ~ 15 ms | Inference speeds on CPU were ~ 65 ms |
| Intel i5 4590 | ~ 20 ms | Inference speeds on CPU were ~ 230 ms |
| Intel i5 6500 | ~ 15 ms | Inference speeds on CPU were ~ 150 ms |
| Intel i5 7200u | 15 - 25 ms | Inference speeds on CPU were ~ 150 ms |
| Intel i5 7500 | ~ 15 ms | Inference speeds on CPU were ~ 260 ms |
| Intel i5 1135G7 | 10 - 15 ms | |
| Intel i5 12600K | ~ 15 ms | Inference speeds on CPU were ~ 35 ms |
| Intel Arc A750 | ~ 4 ms | |
| Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | Notes |
| -------------------- | -------------------------- | ------------------------- | -------------------------------------- |
| Intel Celeron J4105 | ~ 25 ms | | Can only run one detector instance |
| Intel Celeron N3060 | 130 - 150 ms | | Can only run one detector instance |
| Intel Celeron N3205U | ~ 120 ms | | Can only run one detector instance |
| Intel Celeron N4020 | 50 - 200 ms | | Inference speed depends on other loads |
| Intel i3 6100T | 15 - 35 ms | | Can only run one detector instance |
| Intel i3 8100 | ~ 15 ms | | |
| Intel i5 4590 | ~ 20 ms | | |
| Intel i5 6500 | ~ 15 ms | | |
| Intel i5 7200u | 15 - 25 ms | | |
| Intel i5 7500 | ~ 15 ms | | |
| Intel i5 1135G7 | 10 - 15 ms | | |
| Intel i3 12000 | | 320: ~ 19 ms 640: ~ 54 ms | |
| Intel i5 12600K | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | |
| Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms | |
| Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | |
### TensorRT - Nvidia GPU
@ -78,29 +78,35 @@ The TensorRT detector is able to run on x86 hosts that have an Nvidia GPU which
Inference speeds will vary greatly depending on the GPU and the model used.
`tiny` variants are faster than the equivalent non-tiny model, some known examples are below:
| Name | Inference Speed |
| --------------- | --------------- |
| GTX 1060 6GB | ~ 7 ms |
| GTX 1070 | ~ 6 ms |
| GTX 1660 SUPER | ~ 4 ms |
| RTX 3050 | 5 - 7 ms |
| RTX 3070 Mobile | ~ 5 ms |
| Quadro P400 2GB | 20 - 25 ms |
| Quadro P2000 | ~ 12 ms |
| Name | YoloV7 Inference Time | YOLO-NAS Inference Time |
| --------------- | --------------------- | ------------------------- |
| GTX 1060 6GB | ~ 7 ms | |
| GTX 1070 | ~ 6 ms | |
| GTX 1660 SUPER | ~ 4 ms | |
| RTX 3050 | 5 - 7 ms | 320: ~ 10 ms 640: ~ 16 ms |
| RTX 3070 Mobile | ~ 5 ms | |
| Quadro P400 2GB | 20 - 25 ms | |
| Quadro P2000 | ~ 12 ms | |
#### AMD GPUs
### AMD GPUs
With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many AMD GPUs.
With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
### Community Supported:
### Hailo-8l PCIe
#### Nvidia Jetson
Frigate supports the Hailo-8l M.2 card on any hardware, but currently it is only tested on the Raspberry Pi 5 PCIe hat from the AI Kit.
The inference time for the Hailo-8L chip at time of writing is around 17-21 ms for the SSD MobileNet Version 1 model.
## Community Supported Detectors
### Nvidia Jetson
Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
Inference speed will vary depending on the YOLO model, Jetson platform, and Jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.
#### Rockchip platform
### Rockchip platform
Frigate supports hardware video processing on all Rockchip boards. However, hardware object detection is only supported on these boards:
@ -112,12 +118,6 @@ Frigate supports hardware video processing on all Rockchip boards. However, hard
The inference time of a rk3588 with all 3 cores enabled is typically 25-30 ms for yolo-nas s.
#### Hailo-8l PCIe
Frigate supports the Hailo-8l M.2 card on any hardware but currently it is only tested on the Raspberry Pi5 PCIe hat from the AI kit.
The inference time for the Hailo-8L chip at time of writing is around 17-21 ms for the SSD MobileNet Version 1 model.
## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)
This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.

View File

@ -111,13 +111,13 @@ For Raspberry Pi 5 users with the AI Kit, installation is straightforward. Simpl
For other installations, follow these steps for installation:
1. Install the driver from the [Hailo GitHub repository](https://github.com/hailo-ai/hailort-drivers). A convenient script for Linux is available to clone the repository, build the driver, and install it.
2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/41c9b13d2fffce508b32dfc971fa529b49295fbd/docker/hailo8l/user_installation.sh).
2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/hailo8l/user_installation.sh).
3. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
4. Run the script with `./user_installation.sh`
#### Setup
To set up Frigate, follow the default installation instructions, but use a Docker image with the `-h8l` suffix, for example: `ghcr.io/blakeblackshear/frigate:stable-h8l`
To set up Frigate, follow the default installation instructions and use the standard image, for example: `ghcr.io/blakeblackshear/frigate:stable`
Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:
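The compose lines themselves are not shown in this hunk. As a hedged illustration only, the mapping typically exposes the Hailo device node to the container; the node name below is an assumption, not taken from this diff:

```yaml
services:
  frigate:
    devices:
      # assumed Hailo PCIe device node; verify with `ls /dev/hailo*`
      - /dev/hailo0:/dev/hailo0
```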

View File

@ -7,7 +7,7 @@ title: Configuring go2rtc
Use of the bundled go2rtc is optional. You can still configure FFmpeg to connect directly to your cameras. However, adding go2rtc to your configuration is required for the following features (a minimal config example follows the list):
- WebRTC or MSE for live viewing with higher resolutions and frame rates than the jsmpeg stream which is limited to the detect stream
- WebRTC or MSE for live viewing with audio, higher resolutions and frame rates than the jsmpeg stream which is limited to the detect stream and does not support audio
- Live stream support for cameras in Home Assistant Integration
- RTSP relay for use with other consumers to reduce the number of connections to your camera streams
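A minimal sketch of such a configuration (the camera name and RTSP URL are placeholders):

```yaml
go2rtc:
  streams:
    back_yard:
      # substitute your camera's actual stream address
      - rtsp://user:password@192.168.1.10:554/stream1
```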

View File

@ -97,13 +97,13 @@ services:
If you are using HassOS with the addon, the URL should be one of the following depending on which addon version you are using. Note that if you are using the Proxy Addon, you do NOT point the integration at the proxy URL. Just enter the URL used to access Frigate directly from your network.
| Addon Version | URL |
| ------------------------------ | -------------------------------------- |
| Frigate NVR | `http://ccab4aaf-frigate:5000` |
| Frigate NVR (Full Access) | `http://ccab4aaf-frigate-fa:5000` |
| Frigate NVR Beta | `http://ccab4aaf-frigate-beta:5000` |
| Frigate NVR Beta (Full Access) | `http://ccab4aaf-frigate-fa-beta:5000` |
| Frigate NVR HailoRT Beta | `http://ccab4aaf-frigate-hailo-beta:5000` |
| Addon Version | URL |
| ------------------------------ | ----------------------------------------- |
| Frigate NVR | `http://ccab4aaf-frigate:5000` |
| Frigate NVR (Full Access) | `http://ccab4aaf-frigate-fa:5000` |
| Frigate NVR Beta | `http://ccab4aaf-frigate-beta:5000` |
| Frigate NVR Beta (Full Access) | `http://ccab4aaf-frigate-fa-beta:5000` |
| Frigate NVR HailoRT Beta | `http://ccab4aaf-frigate-hailo-beta:5000` |
### Frigate running on a separate machine
@ -301,3 +301,7 @@ which server they are referring to.
#### If I am detecting multiple objects, how do I assign the correct `binary_sensor` to the camera in HomeKit?
The [HomeKit integration](https://www.home-assistant.io/integrations/homekit/) randomly links one of the binary sensors (motion sensor entities) grouped with the camera device in Home Assistant. You can specify a `linked_motion_sensor` in the Home Assistant [HomeKit configuration](https://www.home-assistant.io/integrations/homekit/#linked_motion_sensor) for each camera.
#### I have set up automations based on the occupancy sensors. Sometimes the automation runs because the sensors are turned on, but when I look in Frigate I can't find the object that triggered the sensor. Is this a bug?
No. The occupancy sensors have fewer checks in place because they are often used for things like turning the lights on, where latency needs to be as low as possible. So false positives can sometimes trigger these sensors. If you want false positive filtering, you should use an MQTT sensor on the `frigate/events` or `frigate/reviews` topic.

View File

@ -52,7 +52,9 @@ Message published for each changed tracked object. The first message is publishe
"attributes": {
"face": 0.64
}, // attributes with top score that have been identified on the object at any point
"current_attributes": [] // detailed data about the current attributes in this frame
"current_attributes": [], // detailed data about the current attributes in this frame
"current_estimated_speed": 0.71, // current estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"velocity_angle": 180 // direction of travel relative to the frame for objects moving through zones with speed estimation enabled
},
"after": {
"id": "1607123955.475377-mxklsc",
@ -89,7 +91,9 @@ Message published for each changed tracked object. The first message is publishe
"box": [442, 506, 534, 524],
"score": 0.86
}
]
],
"current_estimated_speed": 0.77, // current estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"velocity_angle": 180 // direction of travel relative to the frame for objects moving through zones with speed estimation enabled
}
}
```
@ -312,6 +316,22 @@ Topic with current state of the PTZ autotracker for a camera. Published values a
Topic to determine if PTZ autotracker is actively tracking an object. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/review_alerts/set`
Topic to turn review alerts for a camera on or off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/review_alerts/state`
Topic with current state of review alerts for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/review_detections/set`
Topic to turn review detections for a camera on or off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/review_detections/state`
Topic with current state of review detections for a camera. Published values are `ON` and `OFF`.
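As a usage sketch (broker address and camera name are placeholders; assumes the paho-mqtt library):

```python
import paho.mqtt.publish as publish

# turn review alerts off for the "front_door" camera
publish.single(
    "frigate/front_door/review_alerts/set",
    payload="OFF",
    hostname="localhost",
)
```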
### `frigate/<camera_name>/birdseye/set`
Topic to turn Birdseye for a camera on and off. Expected values are `ON` and `OFF`. Birdseye mode
@ -337,3 +357,19 @@ the camera to be removed from the view._
### `frigate/<camera_name>/birdseye_mode/state`
Topic with current state of the Birdseye mode for a camera. Published values are `CONTINUOUS`, `MOTION`, `OBJECTS`.
### `frigate/<camera_name>/notifications/set`
Topic to turn notifications on and off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/notifications/state`
Topic with current state of notifications. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/notifications/suspend`
Topic to suspend notifications for a certain number of minutes. Expected value is an integer.
### `frigate/<camera_name>/notifications/suspended`
Topic with timestamp that notifications are suspended until. Published value is a UNIX timestamp, or 0 if notifications are not suspended.
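A companion sketch for suspending notifications (same placeholder assumptions as above):

```python
import paho.mqtt.publish as publish

# suspend notifications for "front_door" for 30 minutes
publish.single(
    "frigate/front_door/notifications/suspend",
    payload="30",  # integer number of minutes, per the topic description
    hostname="localhost",
)
```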

View File

@ -19,6 +19,10 @@ Please use your own knowledge to assess and vet them before you install anything
It supports automatically setting the sub labels in Frigate for person objects that are detected and recognized.
This is a fork (with fixed errors and new features) of the [original Double Take](https://github.com/jakowenko/double-take) project which, unfortunately, isn't being maintained by its author.
## [Frigate Notify](https://github.com/0x2142/frigate-notify)
[Frigate Notify](https://github.com/0x2142/frigate-notify) is a simple app designed to send notifications from Frigate NVR to your favorite platforms. Intended to be used with standalone Frigate installations - Home Assistant not required, MQTT is optional but recommended.
## [Frigate telegram](https://github.com/OldTyT/frigate-telegram)
[Frigate telegram](https://github.com/OldTyT/frigate-telegram) makes it possible to send events from Frigate to Telegram. Events are sent as a message with a text description, video, and thumbnail.

View File

@ -5,7 +5,7 @@ title: Requesting your first model
## Step 1: Upload and annotate your images
Before requesting your first model, you will need to upload and verify at least 1 image to Frigate+. The more images you upload, annotate, and verify the better your results will be. Most users start to see very good results once they have at least 100 verified images per camera. Keep in mind that varying conditions should be included. You will want images from cloudy days, sunny days, dawn, dusk, and night. Refer to the [integration docs](../integrations/plus.md#generate-an-api-key) for instructions on how to easily submit images to Frigate+ directly from Frigate.
Before requesting your first model, you will need to upload and verify at least 10 images to Frigate+. The more images you upload, annotate, and verify the better your results will be. Most users start to see very good results once they have at least 100 verified images per camera. Keep in mind that varying conditions should be included. You will want images from cloudy days, sunny days, dawn, dusk, and night. Refer to the [integration docs](../integrations/plus.md#generate-an-api-key) for instructions on how to easily submit images to Frigate+ directly from Frigate.
It is recommended to submit **both** true positives and false positives. This will help the model differentiate between what is and isn't correct. You should aim for a target of 80% true positive submissions and 20% false positives across all of your images. If you are experiencing false positives in a specific area, submitting true positives for any object type near that area in similar lighting conditions will help teach the model what that area looks like when no objects are present.

View File

@ -13,7 +13,7 @@ You may find that Frigate+ models result in more false positives initially, but
For the best results, follow these guidelines.
**Label every object in the image**: It is important that you label all objects in each image before verifying. If you don't label a car for example, the model will be taught that part of the image is _not_ a car and it will start to get confused.
**Label every object in the image**: It is important that you label all objects in each image before verifying. If you don't label a car for example, the model will be taught that part of the image is _not_ a car and it will start to get confused. You can exclude labels that you don't want detected on any of your cameras.
**Make tight bounding boxes**: Tighter bounding boxes improve the recognition and ensure that accurate bounding boxes are predicted at runtime.
@ -21,7 +21,7 @@ For the best results, follow the following guidelines.
**Label objects hard to identify as difficult**: When objects are truly difficult to make out, such as a car barely visible through a bush, or a dog that is hard to distinguish from the background at night, flag it as 'difficult'. This is not used in the model training as of now, but will in the future.
**`amazon`, `ups`, and `fedex` should label the logo**: For a Fedex truck, label the truck as a `car` and make a different bounding box just for the Fedex logo. If there are multiple logos, label each of them.
**Delivery logos such as `amazon`, `ups`, and `fedex` should label the logo**: For a Fedex truck, label the truck as a `car` and make a different bounding box just for the Fedex logo. If there are multiple logos, label each of them.
![Fedex Logo](/img/plus/fedex-logo.jpg)

View File

@ -17,7 +17,7 @@ Information on how to integrate Frigate+ with Frigate can be found in the [integ
## Available model types
There are two model types offered in Frigate+: `mobiledet` and `yolonas`. Both of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
There are two model types offered in Frigate+, `mobiledet` and `yolonas`. Both of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types).
@ -32,7 +32,7 @@ Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVi
:::warning
Using Frigate+ models with `onnx` and `rocm` is only available with Frigate 0.15, which is still under development.
Using Frigate+ models with `onnx` and `rocm` is only available with Frigate 0.15 and later.
:::
@ -48,11 +48,19 @@ _\* Requires Frigate 0.15_
## Available label types
Frigate+ models support a more relevant set of objects for security cameras. Currently, only the following objects are supported: `person`, `face`, `car`, `license_plate`, `amazon`, `ups`, `fedex`, `package`, `dog`, `cat`, `deer`. Other object types available in the default Frigate model are not available. Additional object types will be added in future releases.
Frigate+ models support a more relevant set of objects for security cameras. Currently, the following objects are supported:
- **People**: `person`, `face`
- **Vehicles**: `car`, `motorcycle`, `bicycle`, `boat`, `license_plate`
- **Delivery Logos**: `amazon`, `usps`, `ups`, `fedex`, `dhl`, `an_post`, `purolator`, `postnl`, `nzpost`, `postnord`, `gls`, `dpd`
- **Animals**: `dog`, `cat`, `deer`, `horse`, `bird`, `raccoon`, `fox`, `bear`, `cow`, `squirrel`, `goat`, `rabbit`
- **Other**: `package`, `waste_bin`, `bbq_grill`, `robot_lawnmower`, `umbrella`
Other object types available in the default Frigate model are not available. Additional object types will be added in future releases.
### Label attributes
Frigate has special handling for some labels when using Frigate+ models. `face`, `license_plate`, `amazon`, `ups`, and `fedex` are considered attribute labels which are not tracked like regular objects and do not generate review items directly. In addition, the `threshold` filter will have no effect on these labels. You should adjust the `min_score` and other filter values as needed.
Frigate has special handling for some labels when using Frigate+ models. `face`, `license_plate`, and delivery logos such as `amazon`, `ups`, and `fedex` are considered attribute labels which are not tracked like regular objects and do not generate review items directly. In addition, the `threshold` filter will have no effect on these labels. You should adjust the `min_score` and other filter values as needed.
In order to have Frigate start using these attribute labels, you will need to add them to the list of objects to track:
@ -75,6 +83,6 @@ When using Frigate+ models, Frigate will choose the snapshot of a person object
![Face Attribute](/img/plus/attribute-example-face.jpg)
`amazon`, `ups`, and `fedex` labels are used to automatically assign a sub label to car objects.
Delivery logos such as `amazon`, `ups`, and `fedex` labels are used to automatically assign a sub label to car objects.
![Fedex Attribute](/img/plus/attribute-example-fedex.jpg)

View File

@ -54,6 +54,21 @@ The most common reason for the PCIe Coral not being detected is that the driver
- In most cases [the Coral docs](https://coral.ai/docs/m2/get-started/#2-install-the-pcie-driver-and-edge-tpu-runtime) show how to install the driver for the PCIe based Coral.
- For Ubuntu 22.04+ https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
## Attempting to load TPU as pci & Fatal Python error: Illegal instruction
This issue is caused by an outdated gasket driver being used with newer Linux kernels. Installing an updated driver from https://github.com/jnicolson/gasket-builder has been reported to fix the issue.
### Not detected on Raspberry Pi5
A kernel update to the RPi5 means an update to config.txt is required; see [the Raspberry Pi forum for more info](https://forums.raspberrypi.com/viewtopic.php?t=363682&sid=cb59b026a412f0dc041595951273a9ca&start=25)
Specifically, add the following to config.txt
```
dtoverlay=pciex1-compat-pi5,no-mip
dtoverlay=pcie-32bit-dma-pi5
```
## Only One PCIe Coral Is Detected With Coral Dual EdgeTPU
The Coral Dual EdgeTPU is one card with two identical TPU cores. Each core has its own PCIe interface, and the motherboard needs two PCIe buses on the M.2 slot to make them both work.

View File

@ -17,6 +17,10 @@ ffmpeg:
record: preset-record-generic-audio-aac
```
### How can I get sound in live view?
Audio is only supported for live view when go2rtc is configured, see [the live docs](../configuration/live.md) for more information.
### I can't view recordings in the Web UI.
Ensure your cameras send h264 encoded video, or [transcode them](/configuration/restream.md).

View File

@ -33,9 +33,11 @@ const sidebars: SidebarsConfig = {
'configuration/object_detectors',
'configuration/audio_detectors',
],
'Semantic Search': [
Classifiers: [
'configuration/semantic_search',
'configuration/genai',
'configuration/face_recognition',
'configuration/license_plate_recognition',
],
Cameras: [
'configuration/cameras',
@ -82,6 +84,7 @@ const sidebars: SidebarsConfig = {
items: frigateHttpApiSidebar,
},
'integrations/mqtt',
'configuration/metrics',
'integrations/third_party_extensions',
],
'Frigate+': [

BIN docs/static/img/ground-plane.jpg (new image, 231 KiB; binary file not shown)

View File

@ -3,12 +3,15 @@ import faulthandler
import signal
import sys
import threading
from typing import Union
import ruamel.yaml
from pydantic import ValidationError
from frigate.app import FrigateApp
from frigate.config import FrigateConfig
from frigate.log import setup_logging
from frigate.util.config import find_config_file
def main() -> None:
@ -42,10 +45,51 @@ def main() -> None:
print("*************************************************************")
print("*************************************************************")
print("*** Config Validation Errors ***")
print("*************************************************************")
print("*************************************************************\n")
# Attempt to get the original config file for line number tracking
config_path = find_config_file()
with open(config_path, "r") as f:
yaml_config = ruamel.yaml.YAML()
yaml_config.preserve_quotes = True
full_config = yaml_config.load(f)
for error in e.errors():
location = ".".join(str(item) for item in error["loc"])
print(f"{location}: {error['msg']}")
error_path = error["loc"]
current = full_config
line_number = "Unknown"
last_line_number = "Unknown"
try:
for i, part in enumerate(error_path):
key: Union[int, str] = (
int(part) if isinstance(part, str) and part.isdigit() else part
)
if isinstance(current, ruamel.yaml.comments.CommentedMap):
current = current[key]
elif isinstance(current, list):
if isinstance(key, int):
current = current[key]
if hasattr(current, "lc"):
last_line_number = current.lc.line
if i == len(error_path) - 1:
if hasattr(current, "lc"):
line_number = current.lc.line
else:
line_number = last_line_number
except Exception as traverse_error:
print(f"Could not determine exact line number: {traverse_error}")
if current != full_config:
print(f"Line # : {line_number}")
print(f"Key : {' -> '.join(map(str, error_path))}")
print(f"Value : {error.get('input', '-')}")
print(f"Message : {error.get('msg', error.get('type', 'Unknown'))}\n")
print("*************************************************************")
print("*** End Config Validation Errors ***")
print("*************************************************************")

View File

@ -1,5 +1,6 @@
"""Main api runner."""
import asyncio
import copy
import json
import logging
@ -7,15 +8,20 @@ import os
import traceback
from datetime import datetime, timedelta
from functools import reduce
from io import StringIO
from typing import Any, Optional
import aiofiles
import requests
import ruamel.yaml
from fastapi import APIRouter, Body, Path, Request, Response
from fastapi.encoders import jsonable_encoder
from fastapi.params import Depends
from fastapi.responses import JSONResponse, PlainTextResponse
from fastapi.responses import JSONResponse, PlainTextResponse, StreamingResponse
from markupsafe import escape
from peewee import operator
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest
from pydantic import ValidationError
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody
@ -31,6 +37,7 @@ from frigate.util.config import find_config_file
from frigate.util.services import (
ffprobe_stream,
get_nvidia_driver_info,
process_logs,
restart_frigate,
vainfo_hwaccel,
)
@ -105,6 +112,12 @@ def stats_history(request: Request, keys: str = None):
return JSONResponse(content=request.app.stats_emitter.get_stats_history(keys))
@router.get("/metrics")
def metrics():
"""Expose Prometheus metrics endpoint"""
return Response(content=generate_latest(), media_type=CONTENT_TYPE_LATEST)
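For scraping, a minimal Prometheus job sketch; the target and path are assumptions (the path supposes the API is proxied under `/api`, as with Frigate's bundled nginx):

```yaml
scrape_configs:
  - job_name: frigate
    metrics_path: /api/metrics
    static_configs:
      - targets: ["frigate.local:5000"]
```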
@router.get("/config")
def config(request: Request):
config_obj: FrigateConfig = request.app.frigate_config
@ -154,6 +167,7 @@ def config(request: Request):
config["plus"] = {"enabled": request.app.frigate_config.plus_api.is_active()}
config["model"]["colormap"] = config_obj.model.colormap
config["model"]["all_attributes"] = config_obj.model.all_attributes
config["model"]["non_logo_attributes"] = config_obj.model.non_logo_attributes
# use merged labelmap
for detector_config in config["detectors"].values():
@ -186,7 +200,6 @@ def config_raw():
@router.post("/config/save")
def config_save(save_option: str, body: Any = Body(media_type="text/plain")):
new_config = body.decode()
if not new_config:
return JSONResponse(
content=(
@ -197,13 +210,64 @@ def config_save(save_option: str, body: Any = Body(media_type="text/plain")):
# Validate the config schema
try:
# Use ruamel to parse and preserve line numbers
yaml_config = ruamel.yaml.YAML()
yaml_config.preserve_quotes = True
full_config = yaml_config.load(StringIO(new_config))
FrigateConfig.parse_yaml(new_config)
except ValidationError as e:
error_message = []
for error in e.errors():
error_path = error["loc"]
current = full_config
line_number = "Unknown"
last_line_number = "Unknown"
try:
for i, part in enumerate(error_path):
key = int(part) if part.isdigit() else part
if isinstance(current, ruamel.yaml.comments.CommentedMap):
current = current[key]
elif isinstance(current, list):
current = current[key]
if hasattr(current, "lc"):
last_line_number = current.lc.line
if i == len(error_path) - 1:
if hasattr(current, "lc"):
line_number = current.lc.line
else:
line_number = last_line_number
except Exception:
line_number = "Unable to determine"
error_message.append(
f"Line {line_number}: {' -> '.join(map(str, error_path))} - {error.get('msg', error.get('type', 'Unknown'))}"
)
return JSONResponse(
content=(
{
"success": False,
"message": "Your configuration is invalid.\nSee the official documentation at docs.frigate.video.\n\n"
+ "\n".join(error_message),
}
),
status_code=400,
)
except Exception:
return JSONResponse(
content=(
{
"success": False,
"message": f"\nConfig Error:\n\n{escape(str(traceback.format_exc()))}",
"message": f"\nYour configuration is invalid.\nSee the official documentation at docs.frigate.video.\n\n{escape(str(traceback.format_exc()))}",
}
),
status_code=400,
@ -394,9 +458,10 @@ def nvinfo():
@router.get("/logs/{service}", tags=[Tags.logs])
def logs(
async def logs(
service: str = Path(enum=["frigate", "nginx", "go2rtc"]),
download: Optional[str] = None,
stream: Optional[bool] = False,
start: Optional[int] = 0,
end: Optional[int] = None,
):
@ -415,6 +480,27 @@ def logs(
status_code=500,
)
async def stream_logs(file_path: str):
"""Asynchronously stream log lines."""
buffer = ""
try:
async with aiofiles.open(file_path, "r") as file:
await file.seek(0, 2)
while True:
line = await file.readline()
if line:
buffer += line
# Process logs only when there are enough lines in the buffer
if "\n" in buffer:
_, processed_lines = process_logs(buffer, service)
buffer = ""
for processed_line in processed_lines:
yield f"{processed_line}\n"
else:
await asyncio.sleep(0.1)
except FileNotFoundError:
yield "Log file not found.\n"
log_locations = {
"frigate": "/dev/shm/logs/frigate/current",
"go2rtc": "/dev/shm/logs/go2rtc/current",
@ -431,48 +517,17 @@ def logs(
if download:
return download_logs(service_location)
if stream:
return StreamingResponse(stream_logs(service_location), media_type="text/plain")
# For full logs initially
try:
file = open(service_location, "r")
contents = file.read()
file.close()
# use the start timestamp to group logs together
logLines = []
keyLength = 0
dateEnd = 0
currentKey = ""
currentLine = ""
for rawLine in contents.splitlines():
cleanLine = rawLine.strip()
if len(cleanLine) < 10:
continue
# handle cases where S6 does not include date in log line
if " " not in cleanLine:
cleanLine = f"{datetime.now()} {cleanLine}"
if dateEnd == 0:
dateEnd = cleanLine.index(" ")
keyLength = dateEnd - (6 if service_location == "frigate" else 0)
newKey = cleanLine[0:keyLength]
if newKey == currentKey:
currentLine += f"\n{cleanLine[dateEnd:].strip()}"
continue
else:
if len(currentLine) > 0:
logLines.append(currentLine)
currentKey = newKey
currentLine = cleanLine
logLines.append(currentLine)
async with aiofiles.open(service_location, "r") as file:
contents = await file.read()
total_lines, log_lines = process_logs(contents, service, start, end)
return JSONResponse(
content={"totalLines": len(logLines), "lines": logLines[start:end]},
content={"totalLines": total_lines, "lines": log_lines},
status_code=200,
)
except FileNotFoundError as e:
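On the client side, the new streaming mode can be followed incrementally. A hedged sketch using the requests library (host and service are placeholders):

```python
import requests

# tail Frigate's own service log via the streaming endpoint added above
with requests.get(
    "http://frigate.local:5000/api/logs/frigate",
    params={"stream": "true"},
    stream=True,
    timeout=(5, None),  # connect timeout only; no read timeout while tailing
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line:
            print(line)
```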

View File

@ -0,0 +1,178 @@
"""Object classification APIs."""
import logging
import os
import random
import shutil
import string
from fastapi import APIRouter, Request, UploadFile
from fastapi.responses import JSONResponse
from pathvalidate import sanitize_filename
from frigate.api.defs.tags import Tags
from frigate.const import FACE_DIR
from frigate.embeddings import EmbeddingsContext
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.events])
@router.get("/faces")
def get_faces():
face_dict: dict[str, list[str]] = {}
for name in os.listdir(FACE_DIR):
face_dir = os.path.join(FACE_DIR, name)
if not os.path.isdir(face_dir):
continue
face_dict[name] = []
for file in sorted(
os.listdir(face_dir),
key=lambda f: os.path.getctime(os.path.join(face_dir, f)),
reverse=True,
):
face_dict[name].append(file)
return JSONResponse(status_code=200, content=face_dict)
@router.post("/faces/reprocess")
def reclassify_face(request: Request, body: dict = None):
if not request.app.frigate_config.face_recognition.enabled:
return JSONResponse(
status_code=400,
content={"message": "Face recognition is not enabled.", "success": False},
)
json: dict[str, any] = body or {}
training_file = os.path.join(
FACE_DIR, f"train/{sanitize_filename(json.get('training_file', ''))}"
)
if not training_file or not os.path.isfile(training_file):
return JSONResponse(
content=(
{
"success": False,
"message": f"Invalid filename or no file exists: {training_file}",
}
),
status_code=404,
)
context: EmbeddingsContext = request.app.embeddings
response = context.reprocess_face(training_file)
return JSONResponse(
content=response,
status_code=200,
)
@router.post("/faces/train/{name}/classify")
def train_face(request: Request, name: str, body: dict = None):
if not request.app.frigate_config.face_recognition.enabled:
return JSONResponse(
status_code=400,
content={"message": "Face recognition is not enabled.", "success": False},
)
json: dict[str, any] = body or {}
training_file = os.path.join(
FACE_DIR, f"train/{sanitize_filename(json.get('training_file', ''))}"
)
if not training_file or not os.path.isfile(training_file):
return JSONResponse(
content=(
{
"success": False,
"message": f"Invalid filename or no file exists: {training_file}",
}
),
status_code=404,
)
sanitized_name = sanitize_filename(name)
rand_id = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
new_name = f"{sanitized_name}-{rand_id}.webp"
new_file = os.path.join(FACE_DIR, f"{sanitized_name}/{new_name}")
shutil.move(training_file, new_file)
context: EmbeddingsContext = request.app.embeddings
context.clear_face_classifier()
return JSONResponse(
content=(
{
"success": True,
"message": f"Successfully saved {training_file} as {new_name}.",
}
),
status_code=200,
)
@router.post("/faces/{name}/create")
async def create_face(request: Request, name: str):
if not request.app.frigate_config.face_recognition.enabled:
return JSONResponse(
status_code=400,
content={"message": "Face recognition is not enabled.", "success": False},
)
os.makedirs(
os.path.join(FACE_DIR, sanitize_filename(name.replace(" ", "_"))), exist_ok=True
)
return JSONResponse(
status_code=200,
content={"success": False, "message": "Successfully created face folder."},
)
@router.post("/faces/{name}/register")
async def register_face(request: Request, name: str, file: UploadFile):
if not request.app.frigate_config.face_recognition.enabled:
return JSONResponse(
status_code=400,
content={"message": "Face recognition is not enabled.", "success": False},
)
context: EmbeddingsContext = request.app.embeddings
result = context.register_face(name, await file.read())
return JSONResponse(
status_code=200 if result.get("success", True) else 400,
content=result,
)
@router.post("/faces/{name}/delete")
def deregister_faces(request: Request, name: str, body: dict = None):
if not request.app.frigate_config.face_recognition.enabled:
return JSONResponse(
status_code=400,
content={"message": "Face recognition is not enabled.", "success": False},
)
json: dict[str, any] = body or {}
list_of_ids = json.get("ids", "")
if not list_of_ids or len(list_of_ids) == 0:
return JSONResponse(
content=({"success": False, "message": "Not a valid list of ids"}),
status_code=404,
)
context: EmbeddingsContext = request.app.embeddings
context.delete_face_ids(
name, map(lambda file: sanitize_filename(file), list_of_ids)
)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
status_code=200,
)
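A usage sketch for the register endpoint (multipart upload; the host, name, and file are placeholders, and the path assumes the API is exposed under `/api`):

```python
import requests

# register a sample image for a face named "jane"
with open("jane.jpg", "rb") as f:
    resp = requests.post(
        "http://frigate.local:5000/api/faces/jane/register",
        files={"file": ("jane.jpg", f, "image/jpeg")},  # field name matches the UploadFile param
    )
print(resp.status_code, resp.json())
```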

View File

@ -25,6 +25,8 @@ class EventsQueryParams(BaseModel):
favorites: Optional[int] = None
min_score: Optional[float] = None
max_score: Optional[float] = None
min_speed: Optional[float] = None
max_speed: Optional[float] = None
is_submitted: Optional[int] = None
min_length: Optional[float] = None
max_length: Optional[float] = None
@ -51,6 +53,8 @@ class EventsSearchQueryParams(BaseModel):
timezone: Optional[str] = "utc"
min_score: Optional[float] = None
max_score: Optional[float] = None
min_speed: Optional[float] = None
max_speed: Optional[float] = None
sort: Optional[str] = None
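These parameters enable speed-based filtering and sorting of events. A hedged query sketch (host is a placeholder; speeds are in the unit system configured under `ui`):

```python
import requests

# fetch events that averaged between 10 and 40 (mph or kph), fastest first
resp = requests.get(
    "http://frigate.local:5000/api/events",
    params={"min_speed": 10, "max_speed": 40, "sort": "speed_desc"},
)
for event in resp.json():
    print(event["id"], event["data"].get("average_estimated_speed"))
```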

View File

@ -20,6 +20,7 @@ class MediaLatestFrameQueryParams(BaseModel):
regions: Optional[int] = None
quality: Optional[int] = 70
height: Optional[int] = None
store: Optional[int] = None
class MediaEventsSnapshotQueryParams(BaseModel):
@ -40,3 +41,8 @@ class MediaMjpegFeedQueryParams(BaseModel):
mask: Optional[int] = None
motion: Optional[int] = None
regions: Optional[int] = None
class MediaRecordingsSummaryQueryParams(BaseModel):
timezone: str = "utc"
cameras: Optional[str] = "all"

View File

@ -8,6 +8,9 @@ class EventsSubLabelBody(BaseModel):
subLabelScore: Optional[float] = Field(
title="Score for sub label", default=None, gt=0.0, le=1.0
)
camera: Optional[str] = Field(
title="Camera this object is detected on.", default=None
)
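With the optional `camera` field, a sub label can be set even for an in-progress object that has no database row yet. A hedged usage sketch (event id, camera, and host are placeholders):

```python
import requests

resp = requests.post(
    "http://frigate.local:5000/api/events/1607123955.475377-mxklsc/sub_label",
    json={"subLabel": "delivery driver", "subLabelScore": 0.9, "camera": "front_door"},
)
print(resp.json())
```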
class EventsDescriptionBody(BaseModel):

View File

@ -0,0 +1,5 @@
from pydantic import BaseModel, Field
class ExportRenameBody(BaseModel):
name: str = Field(title="Friendly name", max_length=256)

View File

@ -10,4 +10,5 @@ class Tags(Enum):
review = "Review"
export = "Export"
events = "Events"
classification = "classification"
auth = "Auth"

View File

@ -92,6 +92,8 @@ def events(params: EventsQueryParams = Depends()):
favorites = params.favorites
min_score = params.min_score
max_score = params.max_score
min_speed = params.min_speed
max_speed = params.max_speed
is_submitted = params.is_submitted
min_length = params.min_length
max_length = params.max_length
@ -226,6 +228,12 @@ def events(params: EventsQueryParams = Depends()):
if min_score is not None:
clauses.append((Event.data["score"] >= min_score))
if max_speed is not None:
clauses.append((Event.data["average_estimated_speed"] <= max_speed))
if min_speed is not None:
clauses.append((Event.data["average_estimated_speed"] >= min_speed))
if min_length is not None:
clauses.append(((Event.end_time - Event.start_time) >= min_length))
@ -249,6 +257,10 @@ def events(params: EventsQueryParams = Depends()):
order_by = Event.data["score"].asc()
elif sort == "score_desc":
order_by = Event.data["score"].desc()
elif sort == "speed_asc":
order_by = Event.data["average_estimated_speed"].asc()
elif sort == "speed_desc":
order_by = Event.data["average_estimated_speed"].desc()
elif sort == "date_asc":
order_by = Event.start_time.asc()
elif sort == "date_desc":
@ -316,7 +328,15 @@ def events_explore(limit: int = 10):
k: v
for k, v in event.data.items()
if k
in ["type", "score", "top_score", "description", "sub_label_score"]
in [
"type",
"score",
"top_score",
"description",
"sub_label_score",
"average_estimated_speed",
"velocity_angle",
]
},
"event_count": label_counts[event.label],
}
@ -367,6 +387,8 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
before = params.before
min_score = params.min_score
max_score = params.max_score
min_speed = params.min_speed
max_speed = params.max_speed
time_range = params.time_range
has_clip = params.has_clip
has_snapshot = params.has_snapshot
@ -466,6 +488,16 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
if max_score is not None:
event_filters.append((Event.data["score"] <= max_score))
if min_speed is not None and max_speed is not None:
event_filters.append(
(Event.data["average_estimated_speed"].between(min_speed, max_speed))
)
else:
if min_speed is not None:
event_filters.append((Event.data["average_estimated_speed"] >= min_speed))
if max_speed is not None:
event_filters.append((Event.data["average_estimated_speed"] <= max_speed))
if time_range != DEFAULT_TIME_RANGE:
tz_name = params.timezone
hour_modifier, minute_modifier, _ = get_tz_modifiers(tz_name)
@ -581,7 +613,16 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
processed_event["data"] = {
k: v
for k, v in event["data"].items()
if k in ["type", "score", "top_score", "description"]
if k
in [
"type",
"score",
"top_score",
"description",
"sub_label_score",
"average_estimated_speed",
"velocity_angle",
]
}
if event["id"] in search_results:
@ -596,6 +637,10 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
processed_events.sort(key=lambda x: x["score"])
elif min_score is not None and max_score is not None and sort == "score_desc":
processed_events.sort(key=lambda x: x["score"], reverse=True)
elif min_speed is not None and max_speed is not None and sort == "speed_asc":
processed_events.sort(key=lambda x: x["average_estimated_speed"])
elif min_speed is not None and max_speed is not None and sort == "speed_desc":
processed_events.sort(key=lambda x: x["average_estimated_speed"], reverse=True)
elif sort == "date_asc":
processed_events.sort(key=lambda x: x["start_time"])
else:
@ -909,38 +954,59 @@ def set_sub_label(
try:
event: Event = Event.get(Event.id == event_id)
except DoesNotExist:
if not body.camera:
return JSONResponse(
content=(
{
"success": False,
"message": "Event "
+ event_id
+ " not found and camera is not provided.",
}
),
status_code=404,
)
event = None
if request.app.detected_frames_processor:
tracked_obj: TrackedObject = (
request.app.detected_frames_processor.camera_states[
event.camera if event else body.camera
].tracked_objects.get(event_id)
)
else:
tracked_obj = None
if not event and not tracked_obj:
return JSONResponse(
content=({"success": False, "message": "Event " + event_id + " not found"}),
content=(
{"success": False, "message": "Event " + event_id + " not found."}
),
status_code=404,
)
new_sub_label = body.subLabel
new_score = body.subLabelScore
if not event.end_time:
# update tracked object
tracked_obj: TrackedObject = (
request.app.detected_frames_processor.camera_states[
event.camera
].tracked_objects.get(event.id)
)
if tracked_obj:
tracked_obj.obj_data["sub_label"] = (new_sub_label, new_score)
if tracked_obj:
tracked_obj.obj_data["sub_label"] = (new_sub_label, new_score)
# update timeline items
Timeline.update(
data=Timeline.data.update({"sub_label": (new_sub_label, new_score)})
).where(Timeline.source_id == event_id).execute()
event.sub_label = new_sub_label
if event:
event.sub_label = new_sub_label
if new_score:
data = event.data
data["sub_label_score"] = new_score
event.data = data
if new_score:
data = event.data
data["sub_label_score"] = new_score
event.data = data
event.save()
event.save()
return JSONResponse(
content=(
{

View File

@ -12,6 +12,7 @@ from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
from frigate.api.defs.request.export_rename_body import ExportRenameBody
from frigate.api.defs.tags import Tags
from frigate.const import EXPORT_DIR
from frigate.models import Export, Previews, Recordings
@ -129,8 +130,8 @@ def export_recording(
)
@router.patch("/export/{event_id}/{new_name}")
def export_rename(event_id: str, new_name: str):
@router.patch("/export/{event_id}/rename")
def export_rename(event_id: str, body: ExportRenameBody):
try:
export: Export = Export.get(Export.id == event_id)
except DoesNotExist:
@ -144,7 +145,7 @@ def export_rename(event_id: str, new_name: str):
status_code=404,
)
export.name = new_name
export.name = body.name
export.save()
return JSONResponse(
content=(
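The rename now travels in a JSON body rather than the URL path, which sidesteps encoding issues with special characters in names. A hedged usage sketch (ids and host are placeholders):

```python
import requests

resp = requests.patch(
    "http://frigate.local:5000/api/export/1607123955.475377-abcdef/rename",
    json={"name": "Driveway - Tuesday morning"},  # matches ExportRenameBody.name
)
print(resp.status_code, resp.json())
```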

View File

@ -11,7 +11,16 @@ from starlette_context import middleware, plugins
from starlette_context.plugins import Plugin
from frigate.api import app as main_app
from frigate.api import auth, event, export, media, notification, preview, review
from frigate.api import (
auth,
classification,
event,
export,
media,
notification,
preview,
review,
)
from frigate.api.auth import get_jwt_secret, limiter
from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
@ -103,6 +112,7 @@ def create_fastapi_app(
# Routes
# Order of include_router matters: https://fastapi.tiangolo.com/tutorial/path-params/#order-matters
app.include_router(auth.router)
app.include_router(classification.router)
app.include_router(review.router)
app.include_router(main_app.router)
app.include_router(preview.router)

View File

@ -25,6 +25,7 @@ from frigate.api.defs.query.media_query_parameters import (
MediaEventsSnapshotQueryParams,
MediaLatestFrameQueryParams,
MediaMjpegFeedQueryParams,
MediaRecordingsSummaryQueryParams,
)
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
@ -182,11 +183,16 @@ def latest_frame(
frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
ret, img = cv2.imencode(f".{extension}", frame, quality_params)
_, img = cv2.imencode(f".{extension}", frame, quality_params)
return Response(
content=img.tobytes(),
media_type=f"image/{mime_type}",
headers={"Content-Type": f"image/{mime_type}", "Cache-Control": "no-store"},
headers={
"Content-Type": f"image/{mime_type}",
"Cache-Control": "no-store"
if not params.store
else "private, max-age=60",
},
)
elif camera_name == "birdseye" and request.app.frigate_config.birdseye.restream:
frame = cv2.cvtColor(
@ -199,11 +205,16 @@ def latest_frame(
frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
ret, img = cv2.imencode(f".{extension}", frame, quality_params)
_, img = cv2.imencode(f".{extension}", frame, quality_params)
return Response(
content=img.tobytes(),
media_type=f"image/{mime_type}",
headers={"Content-Type": f"image/{mime_type}", "Cache-Control": "no-store"},
headers={
"Content-Type": f"image/{mime_type}",
"Cache-Control": "no-store"
if not params.store
else "private, max-age=60",
},
)
else:
return JSONResponse(
@ -362,6 +373,48 @@ def get_recordings_storage_usage(request: Request):
return JSONResponse(content=camera_usages)
@router.get("/recordings/summary")
def all_recordings_summary(params: MediaRecordingsSummaryQueryParams = Depends()):
"""Returns true/false by day indicating if recordings exist"""
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(params.timezone)
cameras = params.cameras
query = (
Recordings.select(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time + seconds_offset,
"unixepoch",
hour_modifier,
minute_modifier,
),
).alias("day")
)
.group_by(
fn.strftime(
"%Y-%m-%d",
fn.datetime(
Recordings.start_time + seconds_offset,
"unixepoch",
hour_modifier,
minute_modifier,
),
)
)
.order_by(Recordings.start_time.desc())
)
if cameras != "all":
query = query.where(Recordings.camera << cameras.split(","))
recording_days = query.namedtuples()
days = {day.day: True for day in recording_days}
return JSONResponse(content=days)
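A hedged usage sketch of this endpoint (host, timezone, and camera names are placeholders; the response maps day strings to booleans):

```python
import requests

resp = requests.get(
    "http://frigate.local:5000/api/recordings/summary",
    params={"timezone": "America/Chicago", "cameras": "front_door,back_yard"},
)
print(resp.json())  # e.g. {"2025-02-15": true, "2025-02-14": true, ...}
```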
@router.get("/{camera_name}/recordings/summary")
def recordings_summary(camera_name: str, timezone: str = "utc"):
"""Returns hourly summary for recordings of given camera"""
@@ -1035,30 +1088,8 @@ def event_clip(request: Request, event_id: str):
content={"success": False, "message": "Clip not available"}, status_code=404
)
-file_name = f"{event.camera}-{event.id}.mp4"
-clip_path = os.path.join(CLIPS_DIR, file_name)
-if not os.path.isfile(clip_path):
-end_ts = (
-datetime.now().timestamp() if event.end_time is None else event.end_time
-)
-return recording_clip(request, event.camera, event.start_time, end_ts)
-headers = {
-"Content-Description": "File Transfer",
-"Cache-Control": "no-cache",
-"Content-Type": "video/mp4",
-"Content-Length": str(os.path.getsize(clip_path)),
-# nginx: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_headers
-"X-Accel-Redirect": f"/clips/{file_name}",
-}
-return FileResponse(
-clip_path,
-media_type="video/mp4",
-filename=file_name,
-headers=headers,
-)
+end_ts = datetime.now().timestamp() if event.end_time is None else event.end_time
+return recording_clip(request, event.camera, event.start_time, end_ts)
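With the cached-clip path removed, event clips are always stitched from recordings on demand, using the current time as the end bound while an event is ongoing. A hedged usage sketch (host and event id hypothetical):

    import requests

    event_id = "1718128800.123456-abc123"  # hypothetical event id
    resp = requests.get(f"http://frigate.local:5000/api/events/{event_id}/clip.mp4")
    resp.raise_for_status()
    with open("clip.mp4", "wb") as f:
        f.write(resp.content)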
@router.get("/events/{event_id}/preview.gif")

View File

@@ -110,6 +110,28 @@ def review(params: ReviewQueryParams = Depends()):
return JSONResponse(content=[r for r in review])
@router.get("/review_ids", response_model=list[ReviewSegmentResponse])
def review_ids(ids: str):
ids = ids.split(",")
if not ids:
return JSONResponse(
content=({"success": False, "message": "Valid list of ids must be sent"}),
status_code=400,
)
try:
reviews = (
ReviewSegment.select().where(ReviewSegment.id << ids).dicts().iterator()
)
return JSONResponse(list(reviews))
except Exception:
return JSONResponse(
content=({"success": False, "message": "Review segments not found"}),
status_code=400,
)
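The lookup relies on peewee's << operator, which compiles to an SQL IN clause. A small sketch of the equivalence (ids hypothetical):

    from frigate.models import ReviewSegment

    ids = ["abc123", "def456"]
    query = ReviewSegment.select().where(ReviewSegment.id << ids)
    # Roughly: SELECT ... FROM reviewsegment WHERE id IN ('abc123', 'def456')
    print(query.sql())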
@router.get("/review/summary", response_model=ReviewSummaryResponse)
def review_summary(params: ReviewSummaryQueryParams = Depends()):
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(params.timezone)
@@ -490,8 +512,6 @@ def set_not_reviewed(review_id: str):
review.save()
return JSONResponse(
-content=(
-{"success": True, "message": "Set Review " + review_id + " as not viewed"}
-),
+content=({"success": True, "message": f"Set Review {review_id} as not viewed"}),
status_code=200,
)

View File

@@ -17,8 +17,9 @@ import frigate.util as util
from frigate.api.auth import hash_password
from frigate.api.fastapi_app import create_fastapi_app
from frigate.camera import CameraMetrics, PTZMetrics
+from frigate.comms.base_communicator import Communicator
from frigate.comms.config_updater import ConfigPublisher
-from frigate.comms.dispatcher import Communicator, Dispatcher
+from frigate.comms.dispatcher import Dispatcher
from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
EventMetadataTypeEnum,
@@ -34,10 +35,12 @@ from frigate.const import (
CLIPS_DIR,
CONFIG_DIR,
EXPORT_DIR,
FACE_DIR,
MODEL_CACHE_DIR,
RECORD_DIR,
SHM_FRAMES_VAR,
)
from frigate.data_processing.types import DataProcessorMetrics
from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.embeddings import EmbeddingsContext, manage_embeddings
from frigate.events.audio import AudioProcessor
@@ -88,6 +91,9 @@ class FrigateApp:
self.detection_shms: list[mp.shared_memory.SharedMemory] = []
self.log_queue: Queue = mp.Queue()
self.camera_metrics: dict[str, CameraMetrics] = {}
self.embeddings_metrics: DataProcessorMetrics | None = (
DataProcessorMetrics() if config.semantic_search.enabled else None
)
self.ptz_metrics: dict[str, PTZMetrics] = {}
self.processes: dict[str, int] = {}
self.embeddings: Optional[EmbeddingsContext] = None
@@ -96,14 +102,19 @@ class FrigateApp:
self.config = config
def ensure_dirs(self) -> None:
-for d in [
+dirs = [
CONFIG_DIR,
RECORD_DIR,
f"{CLIPS_DIR}/cache",
CACHE_DIR,
MODEL_CACHE_DIR,
EXPORT_DIR,
-]:
+]
+if self.config.face_recognition.enabled:
+    dirs.append(FACE_DIR)
+for d in dirs:
if not os.path.exists(d) and not os.path.islink(d):
logger.info(f"Creating directory: {d}")
os.makedirs(d)
@@ -229,7 +240,10 @@ class FrigateApp:
embedding_process = util.Process(
target=manage_embeddings,
name="embeddings_manager",
-args=(self.config,),
+args=(
+    self.config,
+    self.embeddings_metrics,
+),
)
embedding_process.daemon = True
self.embedding_process = embedding_process
@@ -301,8 +315,14 @@ class FrigateApp:
if self.config.mqtt.enabled:
comms.append(MqttClient(self.config))
-if self.config.notifications.enabled_in_config:
-comms.append(WebPushClient(self.config))
+notification_cameras = [
+    c
+    for c in self.config.cameras.values()
+    if c.enabled and c.notifications.enabled_in_config
+]
+if notification_cameras:
+    comms.append(WebPushClient(self.config, self.stop_event))
comms.append(WebSocketClient(self.config))
comms.append(self.inter_process_communicator)
@@ -491,7 +511,11 @@ class FrigateApp:
self.stats_emitter = StatsEmitter(
self.config,
stats_init(
-self.config, self.camera_metrics, self.detectors, self.processes
+self.config,
+self.camera_metrics,
+self.embeddings_metrics,
+self.detectors,
+self.processes,
),
self.stop_event,
)

View File

@@ -0,0 +1,130 @@
"""Manage camera activity and updating listeners."""
from collections import Counter
from typing import Callable
from frigate.config.config import FrigateConfig
class CameraActivityManager:
def __init__(
self, config: FrigateConfig, publish: Callable[[str, any], None]
) -> None:
self.config = config
self.publish = publish
self.last_camera_activity: dict[str, dict[str, any]] = {}
self.camera_all_object_counts: dict[str, Counter] = {}
self.camera_active_object_counts: dict[str, Counter] = {}
self.zone_all_object_counts: dict[str, Counter] = {}
self.zone_active_object_counts: dict[str, Counter] = {}
self.all_zone_labels: dict[str, set[str]] = {}
for camera_config in config.cameras.values():
if not camera_config.enabled:
continue
self.last_camera_activity[camera_config.name] = {}
self.camera_all_object_counts[camera_config.name] = Counter()
self.camera_active_object_counts[camera_config.name] = Counter()
for zone, zone_config in camera_config.zones.items():
if zone not in self.all_zone_labels:
self.zone_all_object_counts[zone] = Counter()
self.zone_active_object_counts[zone] = Counter()
self.all_zone_labels[zone] = set()
self.all_zone_labels[zone].update(zone_config.objects)
def update_activity(self, new_activity: dict[str, dict[str, any]]) -> None:
all_objects: list[dict[str, any]] = []
for camera in new_activity.keys():
new_objects = new_activity[camera].get("objects", [])
all_objects.extend(new_objects)
if self.last_camera_activity.get(camera, {}).get("objects") != new_objects:
self.compare_camera_activity(camera, new_objects)
# run through every zone, getting a count of objects in that zone right now
for zone, labels in self.all_zone_labels.items():
all_zone_objects = Counter(
obj["label"].replace("-verified", "")
for obj in all_objects
if zone in obj["current_zones"]
)
active_zone_objects = Counter(
obj["label"].replace("-verified", "")
for obj in all_objects
if zone in obj["current_zones"] and not obj["stationary"]
)
any_changed = False
# run through each object and check what topics need to be updated for this zone
for label in labels:
new_count = all_zone_objects[label]
new_active_count = active_zone_objects[label]
if (
new_count != self.zone_all_object_counts[zone][label]
or label not in self.zone_all_object_counts[zone]
):
any_changed = True
self.publish(f"{zone}/{label}", new_count)
self.zone_all_object_counts[zone][label] = new_count
if (
new_active_count != self.zone_active_object_counts[zone][label]
or label not in self.zone_active_object_counts[zone]
):
any_changed = True
self.publish(f"{zone}/{label}/active", new_active_count)
self.zone_active_object_counts[zone][label] = new_active_count
if any_changed:
self.publish(f"{zone}/all", sum(list(all_zone_objects.values())))
self.publish(
f"{zone}/all/active", sum(list(active_zone_objects.values()))
)
self.last_camera_activity = new_activity
def compare_camera_activity(
self, camera: str, new_activity: dict[str, any]
) -> None:
all_objects = Counter(
obj["label"].replace("-verified", "") for obj in new_activity
)
active_objects = Counter(
obj["label"].replace("-verified", "")
for obj in new_activity
if not obj["stationary"]
)
any_changed = False
# run through each object and check what topics need to be updated
for label in self.config.cameras[camera].objects.track:
if label in self.config.model.non_logo_attributes:
continue
new_count = all_objects[label]
new_active_count = active_objects[label]
if (
new_count != self.camera_all_object_counts[camera][label]
or label not in self.camera_all_object_counts[camera]
):
any_changed = True
self.publish(f"{camera}/{label}", new_count)
self.camera_all_object_counts[camera][label] = new_count
if (
new_active_count != self.camera_active_object_counts[camera][label]
or label not in self.camera_active_object_counts[camera]
):
any_changed = True
self.publish(f"{camera}/{label}/active", new_active_count)
self.camera_active_object_counts[camera][label] = new_active_count
if any_changed:
self.publish(f"{camera}/all", sum(list(all_objects.values())))
self.publish(f"{camera}/all/active", sum(list(active_objects.values())))

View File

@@ -0,0 +1,21 @@
from abc import ABC, abstractmethod
from typing import Any, Callable
class Communicator(ABC):
"""pub/sub model via specific protocol."""
@abstractmethod
def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
"""Send data via specific protocol."""
pass
@abstractmethod
def subscribe(self, receiver: Callable) -> None:
"""Pass receiver so communicators can pass commands."""
pass
@abstractmethod
def stop(self) -> None:
"""Stop the communicator."""
pass
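Extracting the Communicator ABC into its own module lets concrete communicators (MQTT, WebSocket, WebPush, ZMQ) import it without pulling in the dispatcher, which now imports WebPushClient itself and would otherwise form an import cycle. A hedged sketch of a custom communicator built on the new base:

    from typing import Any, Callable

    from frigate.comms.base_communicator import Communicator

    class LogCommunicator(Communicator):
        """Toy communicator that just logs published payloads."""

        def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
            print(f"[{topic}] {payload} (retain={retain})")

        def subscribe(self, receiver: Callable) -> None:
            self._receiver = receiver  # dispatcher commands would arrive here

        def stop(self) -> None:
            pass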

View File

@@ -3,16 +3,19 @@
import datetime
import json
import logging
-from abc import ABC, abstractmethod
from typing import Any, Callable, Optional
from frigate.camera import PTZMetrics
from frigate.camera.activity_manager import CameraActivityManager
from frigate.comms.base_communicator import Communicator
from frigate.comms.config_updater import ConfigPublisher
from frigate.comms.webpush import WebPushClient
from frigate.config import BirdseyeModeEnum, FrigateConfig
from frigate.const import (
CLEAR_ONGOING_REVIEW_SEGMENTS,
INSERT_MANY_RECORDINGS,
INSERT_PREVIEW,
NOTIFICATION_TEST,
REQUEST_REGION_GRID,
UPDATE_CAMERA_ACTIVITY,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
@@ -29,25 +32,6 @@ from frigate.util.services import restart_frigate
logger = logging.getLogger(__name__)
-class Communicator(ABC):
-"""pub/sub model via specific protocol."""
-@abstractmethod
-def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
-"""Send data via specific protocol."""
-pass
-@abstractmethod
-def subscribe(self, receiver: Callable) -> None:
-"""Pass receiver so communicators can pass commands."""
-pass
-@abstractmethod
-def stop(self) -> None:
-"""Stop the communicator."""
-pass
class Dispatcher:
"""Handle communication between Frigate and communicators."""
@@ -64,7 +48,7 @@ class Dispatcher:
self.onvif = onvif
self.ptz_metrics = ptz_metrics
self.comms = communicators
-self.camera_activity = {}
+self.camera_activity = CameraActivityManager(config, self.publish)
self.model_state = {}
self.embeddings_reindex = {}
@@ -76,18 +60,25 @@ class Dispatcher:
"motion": self._on_motion_command,
"motion_contour_area": self._on_motion_contour_area_command,
"motion_threshold": self._on_motion_threshold_command,
"notifications": self._on_camera_notification_command,
"recordings": self._on_recordings_command,
"snapshots": self._on_snapshots_command,
"birdseye": self._on_birdseye_command,
"birdseye_mode": self._on_birdseye_mode_command,
"review_alerts": self._on_alerts_command,
"review_detections": self._on_detections_command,
}
self._global_settings_handlers: dict[str, Callable] = {
"notifications": self._on_notification_command,
"notifications": self._on_global_notification_command,
}
for comm in self.comms:
comm.subscribe(self._receive)
self.web_push_client = next(
(comm for comm in communicators if isinstance(comm, WebPushClient)), None
)
def _receive(self, topic: str, payload: str) -> Optional[Any]:
"""Handle receiving of payload from communicators."""
@@ -130,7 +121,7 @@ class Dispatcher:
).execute()
def handle_update_camera_activity():
-self.camera_activity = payload
+self.camera_activity.update_activity(payload)
def handle_update_event_description():
event: Event = Event.get(Event.id == payload["id"])
@@ -171,7 +162,7 @@ class Dispatcher:
)
def handle_on_connect():
-camera_status = self.camera_activity.copy()
+camera_status = self.camera_activity.last_camera_activity.copy()
for camera in camera_status.keys():
camera_status[camera]["config"] = {
@@ -179,9 +170,18 @@ class Dispatcher:
"snapshots": self.config.cameras[camera].snapshots.enabled,
"record": self.config.cameras[camera].record.enabled,
"audio": self.config.cameras[camera].audio.enabled,
"notifications": self.config.cameras[camera].notifications.enabled,
"notifications_suspended": int(
self.web_push_client.suspended_cameras.get(camera, 0)
)
if self.web_push_client
and camera in self.web_push_client.suspended_cameras
else 0,
"autotracking": self.config.cameras[
camera
].onvif.autotracking.enabled,
"alerts": self.config.cameras[camera].review.alerts.enabled,
"detections": self.config.cameras[camera].review.detections.enabled,
}
self.publish("camera_activity", json.dumps(camera_status))
@@ -191,6 +191,9 @@ class Dispatcher:
json.dumps(self.embeddings_reindex.copy()),
)
def handle_notification_test():
self.publish("notification_test", "Test notification")
# Dictionary mapping topic to handlers
topic_handlers = {
INSERT_MANY_RECORDINGS: handle_insert_many_recordings,
@@ -202,13 +205,14 @@ class Dispatcher:
UPDATE_EVENT_DESCRIPTION: handle_update_event_description,
UPDATE_MODEL_STATE: handle_update_model_state,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS: handle_update_embeddings_reindex_progress,
NOTIFICATION_TEST: handle_notification_test,
"restart": handle_restart,
"embeddingsReindexProgress": handle_embeddings_reindex_progress,
"modelState": handle_model_state,
"onConnect": handle_on_connect,
}
if topic.endswith("set") or topic.endswith("ptz"):
if topic.endswith("set") or topic.endswith("ptz") or topic.endswith("suspend"):
try:
parts = topic.split("/")
if len(parts) == 3 and topic.endswith("set"):
@@ -223,6 +227,11 @@ class Dispatcher:
# example /cam_name/ptz payload=MOVE_UP|MOVE_DOWN|STOP...
camera_name = parts[-2]
handle_camera_command("ptz", camera_name, "", payload)
elif len(parts) == 3 and topic.endswith("suspend"):
# example /cam_name/notifications/suspend payload=duration
camera_name = parts[-3]
command = parts[-2]
self._on_camera_notification_suspend(camera_name, payload)
except IndexError:
logger.error(
f"Received invalid {topic.split('/')[-1]} command: {topic}"
@@ -364,16 +373,18 @@ class Dispatcher:
self.config_updater.publish(f"config/motion/{camera_name}", motion_settings)
self.publish(f"{camera_name}/motion_threshold/state", payload, retain=True)
-def _on_notification_command(self, payload: str) -> None:
-"""Callback for notification topic."""
+def _on_global_notification_command(self, payload: str) -> None:
+"""Callback for global notification topic."""
if payload != "ON" and payload != "OFF":
f"Received unsupported value for notification: {payload}"
f"Received unsupported value for all notification: {payload}"
return
notification_settings = self.config.notifications
logger.info(f"Setting notifications: {payload}")
logger.info(f"Setting all notifications: {payload}")
notification_settings.enabled = payload == "ON" # type: ignore[union-attr]
self.config_updater.publish("config/notifications", notification_settings)
self.config_updater.publish(
"config/notifications", {"_global_notifications": notification_settings}
)
self.publish("notifications/state", payload, retain=True)
def _on_audio_command(self, camera_name: str, payload: str) -> None:
@@ -490,3 +501,115 @@ class Dispatcher:
self.config_updater.publish(f"config/birdseye/{camera_name}", birdseye_settings)
self.publish(f"{camera_name}/birdseye_mode/state", payload, retain=True)
def _on_camera_notification_command(self, camera_name: str, payload: str) -> None:
"""Callback for camera level notifications topic."""
notification_settings = self.config.cameras[camera_name].notifications
if payload == "ON":
if not self.config.cameras[camera_name].notifications.enabled_in_config:
logger.error(
"Notifications must be enabled in the config to be turned on via MQTT."
)
return
if not notification_settings.enabled:
logger.info(f"Turning on notifications for {camera_name}")
notification_settings.enabled = True
if (
self.web_push_client
and camera_name in self.web_push_client.suspended_cameras
):
self.web_push_client.suspended_cameras[camera_name] = 0
elif payload == "OFF":
if notification_settings.enabled:
logger.info(f"Turning off notifications for {camera_name}")
notification_settings.enabled = False
if (
self.web_push_client
and camera_name in self.web_push_client.suspended_cameras
):
self.web_push_client.suspended_cameras[camera_name] = 0
self.config_updater.publish(
"config/notifications", {camera_name: notification_settings}
)
self.publish(f"{camera_name}/notifications/state", payload, retain=True)
self.publish(f"{camera_name}/notifications/suspended", "0", retain=True)
def _on_camera_notification_suspend(self, camera_name: str, payload: str) -> None:
"""Callback for camera level notifications suspend topic."""
try:
duration = int(payload)
except ValueError:
logger.error(f"Invalid suspension duration: {payload}")
return
if self.web_push_client is None:
logger.error("WebPushClient not available for suspension")
return
notification_settings = self.config.cameras[camera_name].notifications
if not notification_settings.enabled:
logger.error(f"Notifications are not enabled for {camera_name}")
return
if duration != 0:
self.web_push_client.suspend_notifications(camera_name, duration)
else:
self.web_push_client.unsuspend_notifications(camera_name)
self.publish(
f"{camera_name}/notifications/suspended",
str(
int(self.web_push_client.suspended_cameras.get(camera_name, 0))
if camera_name in self.web_push_client.suspended_cameras
else 0
),
retain=True,
)
def _on_alerts_command(self, camera_name: str, payload: str) -> None:
"""Callback for alerts topic."""
review_settings = self.config.cameras[camera_name].review
if payload == "ON":
if not self.config.cameras[camera_name].review.alerts.enabled_in_config:
logger.error(
"Alerts must be enabled in the config to be turned on via MQTT."
)
return
if not review_settings.alerts.enabled:
logger.info(f"Turning on alerts for {camera_name}")
review_settings.alerts.enabled = True
elif payload == "OFF":
if review_settings.alerts.enabled:
logger.info(f"Turning off alerts for {camera_name}")
review_settings.alerts.enabled = False
self.config_updater.publish(f"config/review/{camera_name}", review_settings)
self.publish(f"{camera_name}/review_alerts/state", payload, retain=True)
def _on_detections_command(self, camera_name: str, payload: str) -> None:
"""Callback for detections topic."""
review_settings = self.config.cameras[camera_name].review
if payload == "ON":
if not self.config.cameras[camera_name].review.detections.enabled_in_config:
logger.error(
"Detections must be enabled in the config to be turned on via MQTT."
)
return
if not review_settings.detections.enabled:
logger.info(f"Turning on detections for {camera_name}")
review_settings.detections.enabled = True
elif payload == "OFF":
if review_settings.detections.enabled:
logger.info(f"Turning off detections for {camera_name}")
review_settings.detections.enabled = False
self.config_updater.publish(f"config/review/{camera_name}", review_settings)
self.publish(f"{camera_name}/review_detections/state", payload, retain=True)

View File

@@ -9,9 +9,12 @@ SOCKET_REP_REQ = "ipc:///tmp/cache/embeddings"
class EmbeddingsRequestEnum(Enum):
+clear_face_classifier = "clear_face_classifier"
embed_description = "embed_description"
embed_thumbnail = "embed_thumbnail"
generate_search = "generate_search"
+register_face = "register_face"
+reprocess_face = "reprocess_face"
class EmbeddingsResponder:
@@ -22,7 +25,7 @@ class EmbeddingsResponder:
def check_for_request(self, process: Callable) -> None:
while True: # load all messages that are queued
-has_message, _, _ = zmq.select([self.socket], [], [], 0.1)
+has_message, _, _ = zmq.select([self.socket], [], [], 0.01)
if not has_message:
break

View File

@@ -7,7 +7,7 @@ from typing import Callable
import zmq
-from frigate.comms.dispatcher import Communicator
+from frigate.comms.base_communicator import Communicator
SOCKET_REP_REQ = "ipc:///tmp/cache/comms"

View File

@@ -5,7 +5,7 @@ from typing import Any, Callable
import paho.mqtt.client as mqtt
from paho.mqtt.enums import CallbackAPIVersion
-from frigate.comms.dispatcher import Communicator
+from frigate.comms.base_communicator import Communicator
from frigate.config import FrigateConfig
logger = logging.getLogger(__name__)
@@ -31,7 +31,10 @@ class MqttClient(Communicator): # type: ignore[misc]
return
self.client.publish(
-f"{self.mqtt_config.topic_prefix}/{topic}", payload, retain=retain
+f"{self.mqtt_config.topic_prefix}/{topic}",
+payload,
+qos=self.config.mqtt.qos,
+retain=retain,
)
def stop(self) -> None:
@@ -104,6 +107,16 @@ class MqttClient(Communicator): # type: ignore[misc]
),
retain=True,
)
self.publish(
f"{camera_name}/review_alerts/state",
"ON" if camera.review.alerts.enabled_in_config else "OFF",
retain=True,
)
self.publish(
f"{camera_name}/review_detections/state",
"ON" if camera.review.detections.enabled_in_config else "OFF",
retain=True,
)
if self.config.notifications.enabled_in_config:
self.publish(
@@ -151,7 +164,7 @@ class MqttClient(Communicator): # type: ignore[misc]
self.connected = True
logger.debug("MQTT connected")
-client.subscribe(f"{self.mqtt_config.topic_prefix}/#")
+client.subscribe(f"{self.mqtt_config.topic_prefix}/#", qos=self.config.mqtt.qos)
self._set_initial_topics()
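Publishes and the wildcard subscription now carry the QoS level from the mqtt config section rather than paho's default of 0 (QoS 0 = at most once, 1 = at least once, 2 = exactly once). A standalone paho sketch of the same calls (broker assumed):

    import paho.mqtt.client as mqtt
    from paho.mqtt.enums import CallbackAPIVersion

    client = mqtt.Client(CallbackAPIVersion.VERSION2)
    client.connect("mqtt.local")  # assumed broker
    client.publish("frigate/front_door/person", "1", qos=1, retain=True)
    client.subscribe("frigate/#", qos=1)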
def _on_disconnect(

View File

@@ -4,13 +4,17 @@ import datetime
import json
import logging
import os
import queue
import threading
from dataclasses import dataclass
from multiprocessing.synchronize import Event as MpEvent
from typing import Any, Callable
from py_vapid import Vapid01
from pywebpush import WebPusher
+from frigate.comms.base_communicator import Communicator
from frigate.comms.config_updater import ConfigSubscriber
-from frigate.comms.dispatcher import Communicator
from frigate.config import FrigateConfig
from frigate.const import CONFIG_DIR
from frigate.models import User
@@ -18,15 +22,36 @@ from frigate.models import User
logger = logging.getLogger(__name__)
@dataclass
class PushNotification:
user: str
payload: dict[str, Any]
title: str
message: str
direct_url: str = ""
image: str = ""
notification_type: str = "alert"
ttl: int = 0
class WebPushClient(Communicator): # type: ignore[misc]
"""Frigate wrapper for webpush client."""
-def __init__(self, config: FrigateConfig) -> None:
+def __init__(self, config: FrigateConfig, stop_event: MpEvent) -> None:
self.config = config
self.stop_event = stop_event
self.claim_headers: dict[str, dict[str, str]] = {}
self.refresh: int = 0
self.web_pushers: dict[str, list[WebPusher]] = {}
self.expired_subs: dict[str, list[str]] = {}
self.suspended_cameras: dict[str, int] = {
c.name: 0 for c in self.config.cameras.values()
}
self.notification_queue: queue.Queue[PushNotification] = queue.Queue()
self.notification_thread = threading.Thread(
target=self._process_notifications, daemon=True
)
self.notification_thread.start()
if not self.config.notifications.email:
logger.warning("Email must be provided for push notifications to be sent.")
@@ -103,30 +128,144 @@ class WebPushClient(Communicator): # type: ignore[misc]
self.expired_subs = {}
def suspend_notifications(self, camera: str, minutes: int) -> None:
"""Suspend notifications for a specific camera."""
suspend_until = int(
(datetime.datetime.now() + datetime.timedelta(minutes=minutes)).timestamp()
)
self.suspended_cameras[camera] = suspend_until
logger.info(
f"Notifications for {camera} suspended until {datetime.datetime.fromtimestamp(suspend_until).strftime('%Y-%m-%d %H:%M:%S')}"
)
def unsuspend_notifications(self, camera: str) -> None:
"""Unsuspend notifications for a specific camera."""
self.suspended_cameras[camera] = 0
logger.info(f"Notifications for {camera} unsuspended")
def is_camera_suspended(self, camera: str) -> bool:
return datetime.datetime.now().timestamp() <= self.suspended_cameras[camera]
def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
"""Wrapper for publishing when client is in valid state."""
# check for updated notification config
_, updated_notification_config = self.config_subscriber.check_for_update()
if updated_notification_config:
-self.config.notifications = updated_notification_config
+for key, value in updated_notification_config.items():
+if key == "_global_notifications":
+self.config.notifications = value
-if not self.config.notifications.enabled:
-return
+elif key in self.config.cameras:
+self.config.cameras[key].notifications = value
if topic == "reviews":
-self.send_alert(json.loads(payload))
+decoded = json.loads(payload)
+camera = decoded["before"]["camera"]
+if not self.config.cameras[camera].notifications.enabled:
+return
+if self.is_camera_suspended(camera):
+logger.debug(f"Notifications for {camera} are currently suspended.")
+return
+self.send_alert(decoded)
elif topic == "notification_test":
if not self.config.notifications.enabled:
return
self.send_notification_test()
-def send_alert(self, payload: dict[str, any]) -> None:
+def send_push_notification(
self,
user: str,
payload: dict[str, Any],
title: str,
message: str,
direct_url: str = "",
image: str = "",
notification_type: str = "alert",
ttl: int = 0,
) -> None:
notification = PushNotification(
user=user,
payload=payload,
title=title,
message=message,
direct_url=direct_url,
image=image,
notification_type=notification_type,
ttl=ttl,
)
self.notification_queue.put(notification)
def _process_notifications(self) -> None:
while not self.stop_event.is_set():
try:
notification = self.notification_queue.get(timeout=1.0)
self.check_registrations()
for pusher in self.web_pushers[notification.user]:
endpoint = pusher.subscription_info["endpoint"]
headers = self.claim_headers[
endpoint[: endpoint.index("/", 10)]
].copy()
headers["urgency"] = "high"
resp = pusher.send(
headers=headers,
ttl=notification.ttl,
data=json.dumps(
{
"title": notification.title,
"message": notification.message,
"direct_url": notification.direct_url,
"image": notification.image,
"id": notification.payload.get("after", {}).get(
"id", ""
),
"type": notification.notification_type,
}
),
timeout=10,
)
if resp.status_code in (404, 410):
self.expired_subs.setdefault(notification.user, []).append(
endpoint
)
elif resp.status_code != 201:
logger.warning(
f"Failed to send notification to {notification.user} :: {resp.status_code}"
)
except queue.Empty:
continue
except Exception as e:
logger.error(f"Error processing notification: {str(e)}")
def send_notification_test(self) -> None:
if not self.config.notifications.email:
return
self.check_registrations()
-# Only notify for alerts
-if payload["after"]["severity"] != "alert":
+for user in self.web_pushers:
+self.send_push_notification(
+user=user,
+payload={},
+title="Test Notification",
+message="This is a test notification from Frigate.",
+direct_url="/",
+notification_type="test",
+)
+def send_alert(self, payload: dict[str, Any]) -> None:
+if (
+not self.config.notifications.email
+or payload["after"]["severity"] != "alert"
+):
return
self.check_registrations()
state = payload["type"]
# Don't notify if message is an update and important fields don't have an update
@@ -155,49 +294,21 @@ class WebPushClient(Communicator): # type: ignore[misc]
# if event is ongoing open to live view otherwise open to recordings view
direct_url = f"/review?id={reviewId}" if state == "end" else f"/#{camera}"
+ttl = 3600 if state == "end" else 0
-for user, pushers in self.web_pushers.items():
-for pusher in pushers:
-endpoint = pusher.subscription_info["endpoint"]
-# set headers for notification behavior
-headers = self.claim_headers[
-endpoint[0 : endpoint.index("/", 10)]
-].copy()
-headers["urgency"] = "high"
-ttl = 3600 if state == "end" else 0
-# send message
-resp = pusher.send(
-headers=headers,
-ttl=ttl,
-data=json.dumps(
-{
-"title": title,
-"message": message,
-"direct_url": direct_url,
-"image": image,
-"id": reviewId,
-"type": "alert",
-}
-),
-)
-if resp.status_code == 201:
-pass
-elif resp.status_code == 404 or resp.status_code == 410:
-# subscription is not found or has been unsubscribed
-if not self.expired_subs.get(user):
-self.expired_subs[user] = []
-self.expired_subs[user].append(pusher.subscription_info["endpoint"])
-# the subscription no longer exists and should be removed
-else:
-logger.warning(
-f"Failed to send notification to {user} :: {resp.headers}"
-)
+for user in self.web_pushers:
+self.send_push_notification(
+user=user,
+payload=payload,
+title=title,
+message=message,
+direct_url=direct_url,
+image=image,
+ttl=ttl,
+)
self.cleanup_registrations()
def stop(self) -> None:
-pass
+logger.info("Closing notification queue")
+self.notification_thread.join()
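The refactor turns delivery into a producer/consumer pipeline: send_alert() and send_notification_test() only enqueue PushNotification items, and the daemon thread drains the queue, so a slow push endpoint can no longer stall the dispatcher. A standalone sketch of the pattern:

    import queue
    import threading

    q: "queue.Queue[str]" = queue.Queue()
    stop = threading.Event()

    def worker() -> None:
        while not stop.is_set():
            try:
                # mirrors notification_queue.get(timeout=1.0) above
                item = q.get(timeout=1.0)
            except queue.Empty:
                continue
            print("delivering", item)

    threading.Thread(target=worker, daemon=True).start()
    q.put("notification payload")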

View File

@@ -15,7 +15,7 @@ from ws4py.server.wsgirefserver import (
from ws4py.server.wsgiutils import WebSocketWSGIApplication
from ws4py.websocket import WebSocket as WebSocket_
-from frigate.comms.dispatcher import Communicator
+from frigate.comms.base_communicator import Communicator
from frigate.config import FrigateConfig
logger = logging.getLogger(__name__)

Some files were not shown because too many files have changed in this diff.