
Recommended hardware

Cameras

Cameras that output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. It is also helpful if your camera supports multiple substreams to allow different resolutions to be used for detection, streaming, and recordings without re-encoding.
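In Frigate, that substream setup is expressed by assigning roles per input, so the low-resolution substream feeds detection while the full-resolution main stream is recorded without re-encoding. A minimal sketch (the camera name and RTSP paths are placeholders):

```yaml
cameras:
  front_door:                               # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://192.168.1.10:554/sub   # low-res substream
          roles:
            - detect
        - path: rtsp://192.168.1.10:554/main  # full-res main stream
          roles:
            - record
```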

I recommend Dahua, Hikvision, and Amcrest in that order. Dahua edges out Hikvision because they are easier to find and order, not because they are better cameras. I personally use Dahua cameras because they are easier to purchase directly. In my experience, Dahua and Hikvision both have multiple streams with configurable resolutions and frame rates, and rock solid streams. They also both have models with large sensors well known for excellent image quality at night. Not all models are equal: larger sensors are better than higher resolutions, especially at night. Amcrest is the fallback recommendation because they are rebranded Dahuas; they rebrand the lower-end models with smaller sensors or fewer configuration options.

Many users have reported various issues with Reolink cameras, so I do not recommend them. If you are using Reolink, I suggest the Reolink specific configuration. WiFi cameras are also not recommended; their streams are less reliable and can cause connection loss and/or lost video data.

Here are some of the cameras I recommend:

I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.

Server

My current favorite is the Beelink EQ12 because of the efficient N100 CPU and dual NICs that allow you to set up a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU and capable of running Debian should work fine. As a bonus, you may want to look for devices with an M.2 or PCIe slot that is compatible with the Google Coral.

| Name | Coral Inference Speed | Coral Compatibility | Notes |
| --- | --- | --- | --- |
| Beelink EQ12 (Amazon) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
| Intel NUC (Amazon) | 5-10ms | USB | Overkill for most, but great performance. Can handle many cameras at 5fps depending on typical amounts of motion. Requires extra parts. |

Detectors

A detector is a device that is optimized for running inference efficiently to detect objects. Using a recommended detector means lower latency between detections and more detections per second. Frigate is designed around the expectation that a detector is used to achieve very low inference speeds. Offloading TensorFlow to a detector is an order of magnitude faster and will reduce your CPU load dramatically. As of 0.12, Frigate supports a handful of different detector types with varying inference speeds and performance.

Google Coral TPU

It is strongly recommended to use a Google Coral. A $60 device will outperform a $2,000 CPU. Frigate should work with any supported Coral device from https://coral.ai

The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.

The PCIe and M.2 versions require installation of a driver on the host. Follow the instructions for your version from https://coral.ai

A single Coral can handle many cameras using the default model and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10 ms, your Coral will top out at 1000/10 = 100, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.
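That capacity math can be written as a one-line helper (the 10 ms figure is just the example from the text):

```python
def max_detection_fps(inference_speed_ms: float) -> float:
    """Theoretical upper bound on detections per second for one Coral.

    A detector that takes inference_speed_ms per inference can run at
    most 1000 / inference_speed_ms inferences each second.
    """
    return 1000.0 / inference_speed_ms

# Example from the text: a Coral reporting 10 ms tops out at 100 fps.
print(max_detection_fps(10))  # 100.0
```

Compare this number against the detection fps Frigate reports to judge how much headroom your Coral has.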

OpenVINO

The OpenVINO detector type is able to run on:

  • 6th Gen Intel Platforms and newer that have an iGPU
  • x86 & Arm64 hosts with VPU Hardware (ex: Intel NCS2)
  • Most modern AMD CPUs (though this is officially not supported by Intel)

More information is available in the detector docs

Inference speeds vary greatly depending on the CPU, GPU, or VPU used; some known examples are below:

| Name | Inference Speed | Notes |
| --- | --- | --- |
| Intel NCS2 VPU | 60 - 65 ms | May vary based on host device |
| Intel Celeron J4105 | ~ 25 ms | Inference speeds on CPU were 150 - 200 ms |
| Intel Celeron N3060 | 130 - 150 ms | Inference speeds on CPU were ~ 550 ms |
| Intel Celeron N3205U | ~ 120 ms | Inference speeds on CPU were ~ 380 ms |
| Intel Celeron N4020 | 50 - 200 ms | Inference speeds on CPU were ~ 800 ms, greatly depends on other loads |
| Intel i3 6100T | 15 - 35 ms | Inference speeds on CPU were 60 - 120 ms |
| Intel i3 8100 | ~ 15 ms | Inference speeds on CPU were ~ 65 ms |
| Intel i5 4590 | ~ 20 ms | Inference speeds on CPU were ~ 230 ms |
| Intel i5 6500 | ~ 15 ms | Inference speeds on CPU were ~ 150 ms |
| Intel i5 7200u | 15 - 25 ms | Inference speeds on CPU were ~ 150 ms |
| Intel i5 7500 | ~ 15 ms | Inference speeds on CPU were ~ 260 ms |
| Intel i5 1135G7 | 10 - 15 ms | |
| Intel i5 12600K | ~ 15 ms | Inference speeds on CPU were ~ 35 ms |

TensorRT - Nvidia GPU

The TensorRT detector is able to run on x86 hosts that have an Nvidia GPU which supports the 12.x series of CUDA libraries. The minimum driver version on the host system must be >=525.60.13, and the GPU must support a Compute Capability of 5.0 or greater. This generally correlates to a Maxwell-era GPU or newer; check the TensorRT docs for more info.

Inference speeds will vary greatly depending on the GPU and the model used. Tiny variants are faster than the equivalent non-tiny model; some known examples are below:

| Name | Inference Speed |
| --- | --- |
| GTX 1060 6GB | ~ 7 ms |
| GTX 1070 | ~ 6 ms |
| GTX 1660 SUPER | ~ 4 ms |
| RTX 3050 | 5 - 7 ms |
| RTX 3070 Mobile | ~ 5 ms |
| Quadro P400 2GB | 20 - 25 ms |
| Quadro P2000 | ~ 12 ms |

Community Supported:

Nvidia Jetson

Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will make use of the Jetson's hardware media engine when configured with the appropriate presets, and will make use of the Jetson's GPU and DLA for object detection when configured with the TensorRT detector.

Inference speed will vary depending on the YOLO model, Jetson platform, and Jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU but not faster, so using the DLA will reduce power consumption but slightly increase inference time.

Rockchip SoC

Frigate supports SBCs with the following Rockchip SoCs:

  • RK3566/RK3568
  • RK3588/RK3588S
  • RV1103/RV1106
  • RK3562

Using the yolov8n model on an Orange Pi 5 Plus with the RK3588 SoC, inference speeds vary between 20 - 25 ms.

What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)

This is taken from a user question on reddit. Modified slightly for clarity.

CPU Usage: I am a CPU, Mendel is a Google Coral

My buddy Mendel and I have been tasked with keeping the neighbor's red footed booby off my parents' yard. Now I'm really bad at identifying birds. It takes me forever, but my buddy Mendel is incredible at it.

Mendel however, struggles at pretty much anything else. So we make an agreement. I wait till I see something that moves, and snap a picture of it for Mendel. I then show him the picture and he tells me what it is. Most of the time it isn't anything. But eventually I see some movement and Mendel tells me it is the Booby. Score!

What happens when I increase the resolution of my camera?

However, we realize that there is a problem: there is still booby poop all over the yard. How could we miss that? I've been watching all day! My parents check the window and realize it's dirty and a bit small to see the entire yard, so they clean it and put a bigger one in. Now there is so much more to see! However, I now have a much bigger area to scan for movement and have to work a lot harder! Even my buddy Mendel has to work harder, as the pictures now have a lot more detail that he has to look at to see if it is our sneaky booby.

Basically - When you increase the resolution and/or the frame rate of the stream there is now significantly more data for the CPU to parse. That takes additional computing power. The Google Coral is really good at doing object detection, but it doesn't have time to look everywhere all the time (especially when there are many windows to check). To balance it, Frigate uses the CPU to look for movement, then sends those frames to the Coral to do object detection. This allows the Coral to be available to a large number of cameras and not overload it.
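The division of labor described above can be sketched in a few lines of Python. This is not Frigate's actual implementation; the frame format (a flat list of pixel values) and the thresholds are made up for illustration, and `run_detector` is a stand-in for a Coral inference call:

```python
def has_motion(prev_frame, frame, threshold=10, min_changed=5):
    """Cheap CPU check: count pixels that changed noticeably."""
    changed = sum(
        1 for a, b in zip(prev_frame, frame) if abs(a - b) > threshold
    )
    return changed >= min_changed

def run_detector(frame):
    """Stand-in for an expensive detector (e.g. Coral) inference call."""
    return ["bird"]  # pretend the model saw something

def process(frames):
    """Motion-gated detection: only frames with motion reach the detector."""
    detections = []
    prev = frames[0]
    for frame in frames[1:]:
        if has_motion(prev, frame):                 # CPU: motion detection
            detections.append(run_detector(frame))  # detector: object detection
        prev = frame
    return detections

# Two static frames, then a sudden change: only one detector call happens.
clips = [[0] * 10, [0] * 10, [100] * 10, [100] * 10]
print(process(clips))  # [['bird']]
```

The key property is that the detector is only invoked on the frames that pass the cheap motion gate, which is how one detector can serve many cameras.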

Do hwaccel args help if I am using a Coral?

YES! The Coral does not help with decoding video streams.

Decompressing video streams takes a significant amount of CPU power. Video compression uses key frames (also known as I-frames) to send a full frame in the video stream. The following frames only include the difference from the key frame, and the CPU has to compile each frame by merging the differences with the key frame. More detailed explanation. Higher resolutions and frame rates mean more processing power is needed to decode the video stream, so try to set them on the camera to avoid unnecessary decoding work.
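A toy sketch of why that merging costs CPU: the decoder must reconstruct every full frame from the key frame plus the accumulated diffs. Real codecs are far more sophisticated (motion-compensated P/B frames, not per-pixel diffs); here a frame is just a list of pixel values:

```python
def decode_stream(key_frame, deltas):
    """Rebuild full frames from a key frame plus per-frame diffs.

    Each delta is a dict {pixel_index: new_value}: only the pixels that
    changed since the previous frame are transmitted, and the decoder
    must merge them back in to produce every full frame.
    """
    frames = [list(key_frame)]
    current = list(key_frame)
    for delta in deltas:
        for index, value in delta.items():
            current[index] = value  # apply this frame's differences
        frames.append(list(current))
    return frames

# A 4-pixel key frame followed by two small diffs:
key = [0, 0, 0, 0]
print(decode_stream(key, [{1: 9}, {3: 7}]))
# [[0, 0, 0, 0], [0, 9, 0, 0], [0, 9, 0, 7]]
```

Hardware acceleration moves this reconstruction work off the CPU and onto the GPU/iGPU's media engine, which is why hwaccel args help even when a Coral handles detection.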