Recommended hardware
Cameras
Cameras that output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. It is also helpful if your camera supports multiple substreams to allow different resolutions to be used for detection, streaming, and recordings without re-encoding.
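For example, a camera's high-resolution main stream can be assigned to recordings while a lower-resolution substream is used for detection. Below is a minimal sketch of such a Frigate config; the camera name, RTSP URLs, credentials, and resolution are placeholders, assuming a Dahua-style URL scheme:

```yaml
cameras:
  front_door:
    ffmpeg:
      inputs:
        # High-resolution main stream, used only for recordings
        - path: rtsp://user:password@192.168.1.10:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
        # Lower-resolution substream, used for object detection
        - path: rtsp://user:password@192.168.1.10:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
    detect:
      # Match the substream's actual resolution
      width: 704
      height: 480
```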
I recommend Dahua, Hikvision, and Amcrest, in that order. Dahua edges out Hikvision because their cameras are easier to find and purchase directly, not because they are better cameras. In my experience, both Dahua and Hikvision offer multiple streams with configurable resolutions and frame rates, and their streams are rock solid. Both also have models with large sensors that are well known for excellent image quality at night. Not all models are equal, however: a larger sensor matters more than a higher resolution, especially at night. Amcrest is the fallback recommendation because their cameras are rebranded Dahuas, typically the lower-end models with smaller sensors or fewer configuration options.
Many users have reported various issues with Reolink cameras, so I do not recommend them. If you are using Reolink, I suggest following the Reolink-specific configuration. WiFi cameras are also not recommended: their streams are less reliable, leading to connection loss and/or lost video data.
Here are some of the cameras I recommend:
- Loryta(Dahua) IPC-T549M-ALED-S3 (affiliate link)
- Loryta(Dahua) IPC-T54IR-AS (affiliate link)
- Amcrest IP5M-T1179EW-AI-V3 (affiliate link)
I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
Server
My current favorite is the Beelink EQ13 because of its efficient N100 CPU and dual NICs, which allow you to set up a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU that is capable of running Debian should work fine. As a bonus, you may want to look for devices with an M.2 or PCIe slot that is compatible with the Google Coral. I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
Name | Coral Inference Speed | Coral Compatibility | Notes |
---|---|---|---|
Beelink EQ13 (Amazon) | 5-10 ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
Detectors
A detector is a device that is optimized for running inferences efficiently to detect objects. Using a recommended detector means there will be less latency between detections, and more detections can be run per second. Frigate is designed around the expectation that a detector is used to achieve very low inference speeds. Offloading TensorFlow to a detector is an order of magnitude faster and will reduce your CPU load dramatically. As of 0.12, Frigate supports a handful of different detector types with varying inference speeds and performance.
Google Coral TPU
It is strongly recommended to use a Google Coral. A $60 device will outperform a $2000 CPU. Frigate should work with any supported Coral device from https://coral.ai.
The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
The PCIe and M.2 versions require installation of a driver on the host. Follow the instructions for your version from https://coral.ai
A single Coral can handle many cameras using the default model and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10 ms, your Coral will top out at 1000 / 10 = 100, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.
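To point Frigate at a Coral, declare it as a detector in the config. A minimal sketch is below (the detector name `coral` is arbitrary):

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb   # USB Coral; use "pci" for the PCIe/M.2 versions
```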
OpenVINO
The OpenVINO detector type is able to run on:
- 6th Gen Intel Platforms and newer that have an iGPU
- x86 & Arm64 hosts with VPU Hardware (ex: Intel NCS2)
- Most modern AMD CPUs (though this is officially not supported by Intel)
More information is available in the detector docs.
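As a rough sketch, an OpenVINO detector entry looks like the following; see the detector docs for the model settings that go with it:

```yaml
detectors:
  ov:
    type: openvino
    device: GPU   # or AUTO / CPU, depending on the hardware available
```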
Inference speeds vary greatly depending on the CPU or GPU used. Some known examples of GPU inference times are below:
Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | Notes |
---|---|---|---|
Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | |
Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms | |
Intel Ultra 5 125H | | 320: ~ 10 ms 640: ~ 22 ms | |
Intel i5 12600K | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | |
Intel i3 12000 | | 320: ~ 19 ms 640: ~ 54 ms | |
Intel i5 1135G7 | 10 - 15 ms | | |
Intel i5 7500 | ~ 15 ms | | |
Intel i5 7200u | 15 - 25 ms | | |
Intel i5 6500 | ~ 15 ms | | |
Intel i5 4590 | ~ 20 ms | | |
Intel i3 8100 | ~ 15 ms | | |
Intel i3 6100T | 15 - 35 ms | | Can only run one detector instance |
Intel Celeron N4020 | 50 - 200 ms | | Inference speed depends on other loads |
Intel Celeron N3205U | ~ 120 ms | | Can only run one detector instance |
Intel Celeron N3060 | 130 - 150 ms | | Can only run one detector instance |
Intel Celeron J4105 | ~ 25 ms | | Can only run one detector instance |
TensorRT - Nvidia GPU
The TensorRT detector is able to run on x86 hosts that have an Nvidia GPU which supports the 12.x series of CUDA libraries. The minimum driver version on the host system must be >=525.60.13. The GPU must also support a Compute Capability of 5.0 or greater, which generally correlates to a Maxwell-era GPU or newer; check the TensorRT docs for more info.
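A sketch of a TensorRT detector entry is below; it assumes a YOLOv7 320 model has already been generated into the model cache as described in the detector docs:

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0   # GPU index, relevant when multiple Nvidia GPUs are present

model:
  # Assumes the yolov7-320 model was generated into the model cache beforehand
  path: /config/model_cache/tensorrt/yolov7-320.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320
  height: 320
```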
Inference speeds will vary greatly depending on the GPU and the model used. `tiny` variants are faster than the equivalent non-tiny model; some known examples are below:
Name | YoloV7 Inference Time | YOLO-NAS Inference Time |
---|---|---|
Quadro P2000 | ~ 12 ms | |
Quadro P400 2GB | 20 - 25 ms | |
RTX 3070 Mobile | ~ 5 ms | |
RTX 3050 | 5 - 7 ms | 320: ~ 10 ms 640: ~ 16 ms |
GTX 1660 SUPER | ~ 4 ms | |
GTX 1070 | ~ 6 ms | |
GTX 1060 6GB | ~ 7 ms |