Mirror of https://github.com/blakeblackshear/frigate.git
Commit 3f05f74ecb

* Initial WIP Dockerfile and scripts to add TensorRT support
* Add TensorRT detector
* WIP attempt to install TensorRT 8.5
* Updates to detector for the cuda-python library
* TensorRT CUDA library rework (WIP; does not run)
* Fixes from rebase to detector factory
* Fix parsing of output memory pointer
* Handle TensorRT logs with the Python logger
* Use non-async interface and convert input data to float32; detection runs without error
* Make TensorRT a separate build from the base Frigate image
* Add script and documentation for generating TRT models
* Add support for a TensorRT devcontainer
* Add labelmap to TRT model script and docs; clean up old scripts
* Update detect to normalize the input tensor using the model input type (see the sketch after this list)
* Add config for selecting the GPU; fix async inference; update documentation
* Update some CUDA libraries to clean up a version warning
* Add CI stage to build the TensorRT tag
* Add note in docs for image tag and model support
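Since the commit message mentions normalizing the input tensor to the model's input type and converting data to float32, here is a minimal Python sketch of that general idea. It is not Frigate's actual detector code; the helper name normalize_input and the 0-255 scaling are illustrative assumptions only.

    import numpy as np

    def normalize_input(frame: np.ndarray, input_dtype: type) -> np.ndarray:
        # Hypothetical helper, not Frigate's implementation: cast the raw
        # uint8 frame to whatever dtype the engine's input binding expects.
        if np.issubdtype(input_dtype, np.floating):
            # Assumed normalization for float models; a real model may expect
            # a different scale (e.g. raw 0-255 values or mean/std scaling).
            return (frame.astype(np.float32) / 255.0).astype(input_dtype)
        return frame.astype(input_dtype)

    # Example: a 416x416 RGB frame converted to float32 before inference.
    frame = np.zeros((416, 416, 3), dtype=np.uint8)
    tensor = normalize_input(frame, np.float32)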
File: 8 lines | 419 B | Plaintext
# NVidia TensorRT Support (amd64 only)
nvidia-pyindex; platform_machine == 'x86_64'
nvidia-tensorrt == 8.4.1.5; platform_machine == 'x86_64'
cuda-python == 11.7; platform_machine == 'x86_64'
cython == 0.29.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu11 == 11.7.*; platform_machine == 'x86_64'
nvidia-cublas-cu11 == 11.11.*; platform_machine == 'x86_64'
nvidia-cudnn-cu11 == 8.7.*; platform_machine == 'x86_64'
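The "; platform_machine == 'x86_64'" suffix on each requirement is a PEP 508 environment marker, which is why the file is labeled amd64 only: pip skips these packages on other architectures. As a minimal sketch (assuming the third-party packaging library is available; it is not one of the requirements above), the markers can be evaluated directly:

    from packaging.markers import Marker

    lines = [
        "nvidia-pyindex; platform_machine == 'x86_64'",
        "nvidia-tensorrt == 8.4.1.5; platform_machine == 'x86_64'",
    ]

    for line in lines:
        requirement, _, marker = line.partition(";")
        # Marker.evaluate() checks the marker against the running interpreter,
        # so this prints True only on an x86_64 (amd64) host.
        print(requirement.strip(), Marker(marker.strip()).evaluate())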