Next-level detection and visualization

Get wide-area intrusion protection and reliable 24/7 detection with a fusion of two powerful technologies: video and radar. This unique device provides state-of-the-art deep learning-powered object classification for next-level detection and visualization.

Introduction

How do radar-video fusion cameras work?

Fusion is performed on two levels in the radar-video fusion camera:

1. Visual fusion: The radar detections and classifications are fused in the video, resulting in a visualization of the radar inside the video.

2. Analytics fusion: The radar detections are fused with the detections and classifications from the video analytics, resulting in a combined analytics output where the respective strengths of both technologies are merged:

  • The distance, position, speed and direction from the radar
  • The position in the video plane and the class from the video


Example:

1. If an object appears at a distance of 50 meters from the device, it may be too small for the video analytics to detect, but the radar will pick it up.

2. The radar detections are fused into the image plane and can be used to raise events in Axis Object Analytics, or to run searches based on the analytics metadata stream.

3. The visual fusion guides the operator to where an event occurred since it is mapped to the visual plane.

4. As the object approaches, it is detected by video analytics.

5. The radar detections are fused with the video analytics, and the combined output is of higher quality and carries more information than either technology can provide on its own.
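The handoff described in the steps above can be sketched as follows. The data structures, field names, and matching logic are illustrative assumptions, not the Axis analytics API:

```python
# Simplified sketch of the two-stage behavior: radar-only tracking at long
# range, fused radar + video tracks once video analytics detects the object.
# All dict keys and the matching strategy are assumptions for illustration.

def process(radar_dets, video_dets, match):
    """Combine radar and video detections into output tracks.

    radar_dets: dicts with 'distance', 'speed', 'direction', 'image_pos'
    video_dets: dicts with 'bbox' and 'class'
    match: function pairing a radar detection with a video detection, or None
    """
    tracks = []
    for r in radar_dets:
        v = match(r, video_dets)
        if v is None:
            # Far away: video analytics may miss the object, radar still tracks it.
            tracks.append({**r, "class": "unknown", "source": "radar"})
        else:
            # Close by: fuse both, keeping the strengths of each technology.
            tracks.append({**r, "bbox": v["bbox"], "class": v["class"],
                           "source": "fusion"})
    return tracks
```

In a real device the matching step runs continuously, so a track that starts as radar-only is upgraded in place as the object approaches.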

The device fuses two powerful technologies (radio frequency and high-resolution video) to deliver reliable detection 24/7 regardless of weather and lighting conditions.

Why radar-video fusion?

  • Your video analytics work well at close range but may fail to detect objects at a far distance or in dark conditions.
  • Your radar works well overall, but from time to time it may miss object classifications. The same applies to the camera, where low light or IR reflections from rain can impair the video.

AXIS Q1656-DLE analyzes two sources to provide full situational awareness and availability:

  • The device fuses the tracks from both technologies, combining video analytics with the strengths of radar technology, such as speed, distance, and direction of movement, for better performance.
  • The combination provides the perfect platform for analytics, ensuring greater accuracy, as each technology complements the other.
  • By adding extra dimensions such as speed and distance to your camera, the camera becomes an even more powerful tool to accurately detect and classify objects of interest.

What's the difference between AXIS Q1656-DLE and AXIS Q1656-LE?

AXIS Q1656-DLE Radar-Video Fusion Camera and AXIS Q1656-LE Box Camera have similar names, but their capabilities differ greatly. Using AXIS Q1656-DLE provides you with:

  • Two devices in one: a radar and a camera, cutting installation and maintenance costs
  • Unique visual and analytics fusion between radar and video
  • Absolute speed and distance from the radar inside analytics metadata and Axis Object Analytics
  • More detections at a distance and in challenging weather and light, plus the ability to choose the preferred detection sensitivity
Note

Fusion relies on the factory calibration of both technologies. Do not change the lens or tamper with the radar unit, as doing so may break the fusion.

How will external white light illumination improve detection performance? 

For reliable detection, it is recommended to have 50 lux in the detection area (see the Axis Object Analytics user manual). Leaving the light on permanently during the night can incur high electricity costs, especially when larger areas need to be covered.
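The cost argument can be made concrete with a back-of-the-envelope calculation. Every number below (illuminator wattage, hours of darkness, electricity price, radar-triggered duty cycle) is an assumption for illustration, not an Axis specification:

```python
# Rough yearly cost comparison: permanent vs. radar-triggered illumination.
# All constants are illustrative assumptions.

ILLUMINATOR_W = 200    # assumed power draw of one white-light illuminator
NIGHT_HOURS = 12       # assumed hours of darkness per day
PRICE_PER_KWH = 0.30   # assumed electricity price (EUR)
DAYS = 365

def yearly_cost(duty_cycle: float) -> float:
    """Energy cost for one illuminator at a given on-time fraction."""
    kwh = ILLUMINATOR_W / 1000 * NIGHT_HOURS * DAYS * duty_cycle
    return kwh * PRICE_PER_KWH

always_on = yearly_cost(1.0)    # light on all night, every night
triggered = yearly_cost(0.05)   # assumed 5% on-time when radar-triggered

print(f"Always on: EUR {always_on:.0f}/year, triggered: EUR {triggered:.0f}/year")
```

Even with generous assumptions about trigger frequency, the motion-triggered approach cuts the energy bill by an order of magnitude per illuminator.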

As an alternative to permanent illumination, you can use radar motion detection as a trigger for the external illuminators. You can control illuminators mounted at different positions individually by connecting different radar zones to different illuminators.
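One way to wire this up is to switch a camera output port (driving an illuminator) when the radar reports motion in a zone. The sketch below uses the VAPIX `io/port.cgi` endpoint; the zone-to-port mapping, credentials, and authentication scheme are assumptions for illustration, so check the VAPIX I/O documentation for your firmware before relying on it:

```python
import base64
import urllib.request

# Sketch: activate the illuminator wired to an output port when the radar
# detects motion in the corresponding zone. Zone names, port numbers, and
# credentials below are hypothetical.

ZONE_TO_PORT = {"zone_north": 1, "zone_south": 2}  # assumed wiring

def port_activation_url(host: str, port: int) -> str:
    # In io/port.cgi, "/" after the port number activates the output.
    return f"http://{host}/axis-cgi/io/port.cgi?action={port}:/"

def on_radar_motion(zone: str, host: str, user: str, password: str) -> None:
    """Called when the radar reports motion in a zone: switch its illuminator on."""
    req = urllib.request.Request(port_activation_url(host, ZONE_TO_PORT[zone]))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")  # device may require digest auth
    urllib.request.urlopen(req, timeout=5)
```

In practice the same connection can be made entirely on-device with the camera's event system (radar zone event as condition, output port as action), with no external script needed.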

Why is the bounding box not covering the object precisely?

If the bounding box does not line up exactly with the object, it is because there is no video analytics detection there: you are seeing the projection of the radar detection in the image, which is not as accurate as a video analytics box. If the box is too high or too low, make sure the installation height is set correctly. The offset can also be due to elevation differences in the scene, such as a sloping road, a hill, or a depression.
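A minimal flat-ground model shows why a wrong installation height shifts the projected box vertically. The camera parameters below are illustrative assumptions, not AXIS Q1656-DLE values:

```python
# Flat-ground sketch of projecting a radar range into the image.
# Focal length and principal point are assumed example values.

def ground_point_image_row(distance_m: float, mount_height_m: float,
                           focal_px: float, cy: float) -> float:
    """Image row (pixels) where an object's ground contact at `distance_m`
    appears, for a camera at `mount_height_m` looking horizontally.

    The ground point sits at depression angle atan(h/d) below the optical
    axis, i.e. approximately focal_px * h / d pixels below the principal
    row cy for small angles.
    """
    return cy + focal_px * mount_height_m / distance_m

# If the configured height differs from the true height, the box shifts
# vertically -- e.g. configuring 4 m for a camera actually mounted at 3 m:
true_row = ground_point_image_row(50, 3.0, focal_px=2000, cy=540)
wrong_row = ground_point_image_row(50, 4.0, focal_px=2000, cy=540)
print(wrong_row - true_row)  # box drawn ~40 px too low
```

The same model explains the terrain effect: a slope or depression changes the effective height difference between camera and object, which the flat-ground assumption cannot capture.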

What are the application areas?

  • AXIS Q1656-DLE is designed for outdoor installation and open-area coverage, in use cases such as high-accuracy detection for critical infrastructure.
  • AXIS Q1656-DLE can also be used for parking lot monitoring, where the user gets advanced video analytics along with additional radar parameters such as distance to the moving object and speed. The maximum supported speed is 55 km/h (35 mph), making it well suited for traffic monitoring in such scenarios.

Are there any limitations?

  • At the launch of AXIS Q1656-DLE, the capabilities of the analytics metadata stream are available under a feature flag, starting from AXIS OS 11.2.
  • AXIS Q1656-DLE is not designed to be used for people counting, especially not in crowded areas.

Axis Developer Documentation