Get wide-area intrusion protection and reliable 24/7 detection with a fusion of two powerful technologies: video and radar. This unique device provides state-of-the-art deep learning-powered object classification for next-level detection and visualization.
Fusion is performed on two levels in the radar-video fusion camera:
1. Visual fusion: The radar detections and classifications are fused in the video, resulting in a visualization of the radar inside the video.
2. Analytics fusion: The radar detections are fused with the detections and classifications from the video analytics, resulting in a combined analytics output where the respective strengths of both technologies are merged:
Example:
1. If an object appears at a distance of 50 meters from the device, it may be too small for the video analytics to detect, but the radar picks it up.
2. The radar detections are fused into the image plane and can be used to raise events in AXIS Object Analytics, or to run a search based on the analytics metadata stream.
3. The visual fusion guides the operator to where an event occurred since it is mapped to the visual plane.
4. As the object approaches, it is detected by video analytics.
5. The radar detections are fused with the video analytics detections, and the combined output is of higher quality and contains more information than either technology can provide on its own.
The device fuses two powerful technologies (radio frequency and high-resolution video) to deliver reliable detection 24/7 regardless of weather and lighting conditions.
AXIS Q1656-DLE analyzes two sources to provide full situational awareness and availability:
AXIS Q1656-DLE Radar-Video Fusion Camera and AXIS Q1656-LE Box Camera have almost identical names, but there is an enormous difference in their capabilities. Using AXIS Q1656-DLE provides you with:
Fusion relies on the factory calibration of both technologies. Do not change the lens or tamper with the radar unit, as this may break the fusion.
For reliable detection, it is recommended to have 50 lux in the detection area (see the AXIS Object Analytics user manual). Leaving the light on permanently during the night can incur high electricity costs, especially when larger areas need to be covered.
As an alternative to using permanent illumination, you can use radar motion detection as a trigger for the external illuminators. You can individually control illuminators mounted at different positions by connecting different radar zones to the different illuminators.
If the bounding box is not located exactly in the right place, it’s because there is no video analytics detection there. You are seeing the projection of the radar detection in the image, and that is not as accurate as a video analytics box. If the box is too high or low, make sure that the installation height is set correctly. It could also be due to elevation differences in the scene, such as a sloping road, a hill, or a depression.
Before you start, we recommend that you read the radar integration guidelines and metadata integration guidelines as you will get data from both technologies with AXIS Q1656-DLE.
In terms of integration, AXIS Q1656-DLE has two video channels and two analytics metadata streams corresponding to the video streams:
RTSP URL for camera channel with video and audio:
/axis-media/media.amp?camera=1&video=1&audio=1
RTSP URL for radar channel with video and audio:
/axis-media/media.amp?camera=2&video=1&audio=1
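For a quick check that both video channels stream as expected, a minimal Python sketch (assuming OpenCV is installed, and using placeholder credentials and the example address 192.168.0.90) could look like this:

import cv2

# Placeholder credentials and example address; replace with your own.
USER = "root"
PASSWORD = "pass"
DEVICE = "192.168.0.90"

# Channel 1 is the camera view, channel 2 is the radar view.
urls = {
    "camera": f"rtsp://{USER}:{PASSWORD}@{DEVICE}/axis-media/media.amp?camera=1&video=1&audio=1",
    "radar": f"rtsp://{USER}:{PASSWORD}@{DEVICE}/axis-media/media.amp?camera=2&video=1&audio=1",
}

for name, url in urls.items():
    cap = cv2.VideoCapture(url)
    ok, _frame = cap.read()
    print(f"{name} channel: {'frame received' if ok else 'no frame'}")
    cap.release()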
RTSP URL for analytics radar-video fusion metadata stream for the camera channel:
/axis-media/media.amp?camera=1&video=0&audio=0&analytics=polygon
The feature flag is available from AXIS OS 11.2. Enabling the feature flag will no longer be needed starting from AXIS OS 11.5 (scheduled for May 2023).
You can enable the flag by entering the command below (add the authentication options your device requires, for example --anyauth -u <username>:<password>):

curl 'http://192.168.0.90/axis-cgi/featureflag.cgi' \
--header "Content-Type: application/json" \
--data \
'{"apiVersion": "1.0", "method": "set", "params":{"flagValues":{"radar_video_fusion_metadata":true}}, "context": " "}'
Which returns:
{
  "apiVersion": "1.0",
  "context": " ",
  "method": "listAll",
  "data": {
    "flags": [
      {
        "name": "radar_video_fusion_metadata",
        "value": true,
        "description": "Include Radar Video Fusion in AnalyticsSceneDescription metadata.",
        "defaultValue": false
      },
To verify that it is enabled, the command below lists the feature flags and their values (same authentication options as above):

curl 'http://192.168.0.90/axis-cgi/featureflag.cgi' \
--header "Content-Type: application/json" \
--data \
'{"apiVersion": "1.0", "method": "listAll", "context": " "}'
Which returns:
{
  "apiVersion": "1.0",
  "context": " ",
  "method": "listAll",
  "data": {
    "flags": [
      {
        "name": "radar_video_fusion_metadata",
        "value": false,
        "description": "Include Radar Video Fusion in AnalyticsSceneDescription metadata.",
        "defaultValue": false
      },
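If you prefer to script these calls, a minimal Python sketch (assuming the requests package is installed and that the device uses digest authentication, with placeholder credentials and the example address used above) could look like this:

import requests
from requests.auth import HTTPDigestAuth

URL = "http://192.168.0.90/axis-cgi/featureflag.cgi"  # example address from above
AUTH = HTTPDigestAuth("root", "pass")                  # placeholder credentials

# Enable the radar-video fusion metadata feature flag.
set_body = {
    "apiVersion": "1.0",
    "method": "set",
    "params": {"flagValues": {"radar_video_fusion_metadata": True}},
    "context": " ",
}
print(requests.post(URL, json=set_body, auth=AUTH).json())

# Verify by listing the feature flags and their values.
list_body = {"apiVersion": "1.0", "method": "listAll", "context": " "}
print(requests.post(URL, json=list_body, auth=AUTH).json())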
Restart the device
After the restart, the new fields in the radar-video fusion metadata stream should be present. Switching the radar transmission off also changes the metadata stream in real time.
RTSP URL for analytics metadata stream for the radar channel:
/axis-media/media.amp?camera=2&video=0&audio=0&analytics=polygon
The RTSP URL for the event stream is the same as for all other Axis devices and is valid for both channels:
/axis-media/media.amp?video=0&audio=0&event=on
AXIS Q1656-DLE is placed on the active track and will get the same AXIS OS capabilities.
A sample frame with the new fields can look like this (the new fields are GeoLocation, SphericalCoordinate, Speed, and Direction):
<?xml version="1.0" ?>
<tt:SampleFrame xmlns:tt="http://www.onvif.org/ver10/schema" Source="AnalyticsSceneDescription">
<tt:Object ObjectId="101">
<tt:Appearance>
<tt:Shape>
<tt:BoundingBox left="-0.6" top="0.6" right="-0.2" bottom="0.2"/>
<tt:CenterOfGravity x="-0.4" y="0.4"/>
<tt:Polygon>
<tt:Point x="-0.6" y="0.6"/>
<tt:Point x="-0.6" y="0.2"/>
<tt:Point x="-0.2" y="0.2"/>
<tt:Point x="-0.2" y="0.6"/>
</tt:Polygon>
</tt:Shape>
<tt:Color>
<tt:ColorCluster>
<tt:Color X="255" Y="255" Z="255" Likelihood="0.8" Colorspace="RGB"/>
</tt:ColorCluster>
</tt:Color>
<tt:Class>
<tt:ClassCandidate>
<tt:Type>Vehicle</tt:Type>
<tt:Likelihood>0.75</tt:Likelihood>
</tt:ClassCandidate>
<tt:Type Likelihood="0.75">Vehicle</tt:Type>
</tt:Class>
<tt:VehicleInfo>
<tt:Type Likelihood="0.75">Bus</tt:Type>
</tt:VehicleInfo>
<tt:GeoLocation lon="-0.000254295" lat="0.000255369" elevation="0"/>
<tt:SphericalCoordinate Distance="40" ElevationAngle="45" AzimuthAngle="88"/>
</tt:Appearance>
<tt:Behaviour>
<tt:Speed>20</tt:Speed>
<tt:Direction yaw="20" pitch="88"/>
</tt:Behaviour>
</tt:Object>
<tt:ObjectTree>
<tt:Delete ObjectId="1"/>
</tt:ObjectTree>
</tt:SampleFrame>
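To show how these new fields could be read out, here is a minimal Python sketch using xml.etree.ElementTree; frame_xml is assumed to hold one AnalyticsSceneDescription frame such as the sample above:

import xml.etree.ElementTree as ET

NS = {"tt": "http://www.onvif.org/ver10/schema"}

def print_fused_fields(frame_xml: str) -> None:
    """Print the radar-derived fields for each object in one sample frame."""
    root = ET.fromstring(frame_xml)
    for obj in root.findall("tt:Object", NS):
        print("Object", obj.get("ObjectId"))
        geo = obj.find("tt:Appearance/tt:GeoLocation", NS)
        sph = obj.find("tt:Appearance/tt:SphericalCoordinate", NS)
        speed = obj.find("tt:Behaviour/tt:Speed", NS)
        direction = obj.find("tt:Behaviour/tt:Direction", NS)
        if geo is not None:
            print("  lon:", geo.get("lon"), "lat:", geo.get("lat"))
        if sph is not None:
            print("  distance:", sph.get("Distance"),
                  "elevation angle:", sph.get("ElevationAngle"),
                  "azimuth angle:", sph.get("AzimuthAngle"))
        if speed is not None:
            print("  speed:", speed.text)
        if direction is not None:
            print("  yaw:", direction.get("yaw"), "pitch:", direction.get("pitch"))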
For video-only targets, the metadata still looks like this:
<?xml version="1.0" ?>
<tt:SampleFrame xmlns:tt="http://www.onvif.org/ver10/schema" Source="AnalyticsSceneDescription">
<tt:Object ObjectId="101">
<tt:Appearance>
<tt:Shape>
<tt:BoundingBox left="-0.6" top="0.6" right="-0.2" bottom="0.2"/>
<tt:CenterOfGravity x="-0.4" y="0.4"/>
<tt:Polygon>
<tt:Point x="-0.6" y="0.6"/>
<tt:Point x="-0.6" y="0.2"/>
<tt:Point x="-0.2" y="0.2"/>
<tt:Point x="-0.2" y="0.6"/>
</tt:Polygon>
</tt:Shape>
<tt:Color>
<tt:ColorCluster>
<tt:Color X="255" Y="255" Z="255" Likelihood="0.8" Colorspace="RGB"/>
</tt:ColorCluster>
</tt:Color>
<tt:Class>
<tt:ClassCandidate>
<tt:Type>Vehicle</tt:Type>
<tt:Likelihood>0.75</tt:Likelihood>
</tt:ClassCandidate>
<tt:Type Likelihood="0.75">Vehicle</tt:Type>
</tt:Class>
<tt:VehicleInfo>
<tt:Type Likelihood="0.75">Bus</tt:Type>
</tt:VehicleInfo>
</tt:Appearance>
</tt:Object>
<tt:ObjectTree>
<tt:Delete ObjectId="1"/>
</tt:ObjectTree>
</tt:SampleFrame>
For radar-only targets with no video history, the metadata looks like this:
<?xml version="1.0" ?>
<tt:SampleFrame xmlns:tt="http://www.onvif.org/ver10/schema" Source="AnalyticsSceneDescription">
<tt:Object ObjectId="101">
<tt:Appearance>
<tt:Shape>
<tt:BoundingBox left="-0.6" top="0.6" right="-0.2" bottom="0.2"/>
<tt:CenterOfGravity x="-0.4" y="0.4"/>
<tt:Polygon>
<tt:Point x="-0.6" y="0.6"/>
<tt:Point x="-0.6" y="0.2"/>
<tt:Point x="-0.2" y="0.2"/>
<tt:Point x="-0.2" y="0.6"/>
</tt:Polygon>
</tt:Shape>
<tt:Class>
<tt:ClassCandidate>
<tt:Type>Vehicle</tt:Type>
<tt:Likelihood>0.75</tt:Likelihood>
</tt:ClassCandidate>
<tt:Type Likelihood="0.75">Vehicle</tt:Type>
</tt:Class>
<tt:VehicleInfo>
<tt:Type Likelihood="0.75">Vehicle</tt:Type>
</tt:VehicleInfo>
<tt:GeoLocation lon="-0.000254295" lat="0.000255369" elevation="0"/>
<tt:SphericalCoordinate Distance="40" ElevationAngle="45" AzimuthAngle="88"/>
</tt:Appearance>
<tt:Behaviour>
<tt:Speed>20</tt:Speed>
<tt:Direction yaw="20" pitch="88"/>
</tt:Behaviour>
</tt:Object>
<tt:ObjectTree>
<tt:Delete ObjectId="1"/>
</tt:ObjectTree>
</tt:SampleFrame>
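Comparing the three samples, the radar-derived fields (GeoLocation, SphericalCoordinate, Speed, Direction) appear only for fused and radar-only targets, and in these samples the Color cluster appears only for targets that video analytics has seen. Purely as an illustration of that difference between the samples (not a guaranteed rule), a small Python helper could look like this:

import xml.etree.ElementTree as ET

NS = {"tt": "http://www.onvif.org/ver10/schema"}

def classify_target(obj: ET.Element) -> str:
    """Rough classification of one tt:Object element based on which fields are present."""
    has_radar = obj.find("tt:Appearance/tt:SphericalCoordinate", NS) is not None
    has_color = obj.find("tt:Appearance/tt:Color", NS) is not None
    if has_radar and has_color:
        return "fused (radar + video)"
    if has_radar:
        return "radar-only"
    return "video-only"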
GeoLocation:
The GeoLocation is presented like this:
<tt:GeoLocation lon="-0.000254295" lat="0.000255369" elevation="0"/>
Spherical Coordinate:
The Spherical Coordinate is presented like this:
<tt:SphericalCoordinate Distance="40" ElevationAngle="45" AzimuthAngle="135"/>
Speed:
The Speed is presented like this:
<tt:Speed>20</tt:Speed>
Direction of movement:
The direction of movement is presented like this:
<tt:Direction yaw="20" pitch="88"/>