The Ultimate Perimeter Duo? Radar + Video Analytics

This article first appeared in Security Middle East magazine

Today’s security market offers a wide array of highly advanced sensors and systems, all designed to help protect perimeters, monitor assets, and safeguard lives. The breadth of choice stems from the fact that each sensor has a sweet spot where it performs best – whether that is a particular environment, a type of intrusion, or the performance expected at a given budget.

SpotterRF Radar and PureActiv Video Analytics

Combining security sensor technologies expands the capability beyond the two sensors acting independently

A typical approach for a sensor manufacturer is to continue to innovate, expanding its sensor’s capabilities, enhancing its strengths and minimizing its weaknesses. Another approach to expanding the “sweet spot” is to combine current sensors so they work together as a more intelligent solution, retaining all the advantages they possessed as individual entities while gaining a wealth of new capabilities from their union with a complementary technology. One such collaboration is the integration of perimeter radar with video analytics.

Stand Alone Sensors:

On their own, radar and video analytics are both excellent perimeter detection sensors. A single radar boasts a very large coverage area and performs in all types of weather. Video analytics, in turn, is highly prized for its ability to discriminate targets, extracting very specific details and behavior through both fixed and movable (PTZ) cameras.

Despite their advanced surveillance capability, no sensor is perfect. Video analytics requires a view of the scene, so extreme weather can prove challenging, and as detection ranges grow longer the effective coverage area shrinks, due to the reduced field of view of a zoomed-in camera. Radars, for their part, cannot always accurately identify a target: for each radar detection, a user must obtain visual confirmation before taking action. In some installations, they can also be prone to reflections and ground clutter.

The magic happens when these sensors are integrated into a collaborative system. Beyond each covering the other’s weaknesses, the integration introduces a wealth of new features and increased capability.

Integration:

Radar and Location Based Video Analytics communicate easily using map-based coordinates

The integration of location-based video analytics and radar is actually quite simple, leveraging a shared geospatial context communicated over XML.

Before addressing how these sensors perform as a team, it’s prudent to first address how the combination is achieved. The actual integration of these two systems is not complex in terms of how they share and coordinate data. Radars communicate using well-defined interface standards, with position and target information usually provided to the analytics engine via XML or a similar standards-based interface. Radars report position as a “range and bearing.” The key to the integration is the use of geospatial, or location-based, video analytics. In simple terms, this form of intelligent video not only understands the nuances of the video pixel information, but also understands where each pixel is located in 3D map space. Converting the radar’s range and bearing into longitude, latitude and elevation coordinates allows the two systems to easily share and collaborate on target information.
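The range-and-bearing conversion described above can be sketched with a standard great-circle forward calculation. This is a minimal illustration, not any vendor’s actual implementation; the function name, the surveyed radar position, and the spherical-earth assumption are all illustrative.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius (spherical-earth assumption)

def radar_to_latlon(radar_lat, radar_lon, range_m, bearing_deg):
    """Convert a radar 'range and bearing' detection to latitude/longitude.

    radar_lat/radar_lon: the radar's surveyed position in decimal degrees.
    bearing_deg: measured clockwise from true north.
    Returns the target's (latitude, longitude) in decimal degrees.
    """
    lat1 = math.radians(radar_lat)
    lon1 = math.radians(radar_lon)
    brg = math.radians(bearing_deg)
    d = range_m / EARTH_RADIUS_M  # angular distance along the surface

    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```

With the target expressed in map coordinates, the analytics engine can place it directly into the same 3D map space as its own pixel-derived detections.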

The ability of each sensor to cover the challenges experienced by the other is, in itself, an enormous value-add for any critical facility seeking improved surveillance capability. However, the integration of radar and video analytics goes well beyond this, providing increased situational awareness, more precise detection and target identification, reduced nuisance alarm rates, and the capability for a fully automated response.

Situational Awareness

The key to any surveillance system is its ability to accurately and quickly communicate the details of an event to the security operator. When multiple systems are in use – such as video, GPS, radar and even a fence sensor – and their outputs are not integrated, a single event may produce many target tracks and object icons on the user interface. This can be due to many factors, including differences in update rates, sensor accuracy or differing conclusions about the type of target detected. One feature of the collaboration of radar and intelligent video is the ability to merge these tracks. Instead of seeing multiple tracks and multiple detection icons, all resulting from the same target, the solution merges the target information into a single icon and a single combined track. The result is reduced clutter, which allows the operator to more quickly understand the situation and take appropriate action.
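The track-merging idea can be sketched as a simple gating rule: reports from different sensors that fall within a small distance of each other are fused into one combined track. This is a hedged, minimal example; real systems gate on velocity and time as well, and the function names, gate distance and flat-earth distance approximation here are assumptions for illustration.

```python
import math

def _dist_m(a, b):
    """Approximate metres between two (lat, lon) points.

    Flat-earth approximation: adequate over the short distances
    typical of a perimeter, not for long baselines.
    """
    dlat = (a[0] - b[0]) * 111_320.0
    dlon = (a[1] - b[1]) * 111_320.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def merge_tracks(reports, gate_m=25.0):
    """Greedily merge per-sensor reports into combined tracks.

    Each report is (sensor_name, lat, lon). A report within gate_m of an
    existing merged track is fused into it (position averaged, sensor
    recorded); otherwise it starts a new track.
    """
    tracks = []  # each: {"pos": (lat, lon), "sensors": set of names}
    for sensor, lat, lon in reports:
        for t in tracks:
            if _dist_m(t["pos"], (lat, lon)) <= gate_m:
                t["pos"] = ((t["pos"][0] + lat) / 2, (t["pos"][1] + lon) / 2)
                t["sensors"].add(sensor)
                break
        else:
            tracks.append({"pos": (lat, lon), "sensors": {sensor}})
    return tracks
```

A radar report and a video report a few metres apart collapse into one map icon, while a genuinely distinct target far outside the gate keeps its own track.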

Increased situational awareness and decreased clutter when using a map-based display

The integration of video analytics and radar provides for the merging of object icons and tracks, decluttering the user interface for the security responder.

Detection / Target Identification

Another key benefit from the integration of radar and intelligent video is the idea of alarm collaboration. Both sensors can detect and identify various characteristics of a potential target. As a combined sensor able to evaluate all the data collected by both technologies, however, the resulting detection and identification becomes extremely accurate and provides the user with a robust set of target data. This weeds out a wide range of nuisance and environmental alarms that either sensor on its own may have struggled to identify. It also enables more precise alarm conditions than are typically achievable with a single sensor type, including very specific conditions around location, type of target, location relative to other targets or assets, as well as target behavior. The combined data set from both sensors can be presented to the user in one place, reducing the need to search for this data across several systems or to manually confirm items.
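One simple way to picture alarm collaboration is to treat the two sensors’ detection confidences as independent evidence and only alarm when the fused confidence clears a threshold. This is an illustrative sketch only; the function name, the independence assumption and the threshold value are not from the article.

```python
def fused_alarm(radar_conf, video_conf, threshold=0.9):
    """Fuse independent detection confidences from radar and video.

    radar_conf/video_conf: each sensor's confidence in [0, 1] that a
    real target is present. Treating the sensors as independent, the
    fused confidence is 1 - (1 - p_r)(1 - p_v). A nuisance source that
    excites only one sensor rarely crosses the alarm threshold.
    Returns (alarm_raised, fused_confidence).
    """
    fused = 1.0 - (1.0 - radar_conf) * (1.0 - video_conf)
    return fused >= threshold, fused
```

Two moderate detections that agree produce a strong alarm, while a single-sensor event – wind-blown clutter seen only by the radar, say – stays below the threshold for operator review rather than triggering a response.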

Automated Response

Perhaps the most exciting aspect of combining these sensors is their ability to automate many first-response actions, freeing security officials to react to the situation rather than shoulder the added responsibility of manually maintaining surveillance of the intrusion. A powerful feature that enables this force multiplier is slew to cue, also called slew to radar. When a new radar target appears, the detection is communicated to the video system, which selects the most appropriate camera, or cameras, automatically steers them to the point of intrusion, and centers the target in the camera view. With the target under automatic surveillance, command and control software can have the sensors collaborate on the validity of the target and present the result to the operator for visual confirmation. Once a target is confirmed, the user interface allows the map-based icon to be changed from “unknown” to “friend” or “foe.” Through continued coordination, the system then remembers this identification for as long as the track persists.
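The slew-to-cue step above reduces to two pieces of geometry: pick a camera that can see the target, then compute the pan and tilt that centre the target in its view. The sketch below assumes a local metres-based map frame (x east, y north, z up) and illustrative camera records; none of the names come from an actual product API.

```python
import math

def pan_tilt_to_target(cam_pos, target_pos):
    """Pan/tilt angles needed to centre a target from a camera.

    cam_pos/target_pos: (x, y, z) in a local map frame, metres.
    Returns (pan_deg clockwise from north, tilt_deg with negative = down).
    """
    dx = target_pos[0] - cam_pos[0]
    dy = target_pos[1] - cam_pos[1]
    dz = target_pos[2] - cam_pos[2]
    pan = math.degrees(math.atan2(dx, dy)) % 360.0
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt

def select_camera(cameras, target_pos):
    """Pick the closest camera whose rated range covers the target.

    Each camera is a dict with "pos" (x, y, z) and "max_range_m".
    Returns None when no camera covers the target.
    """
    def ground_dist(c):
        return math.dist(c["pos"][:2], target_pos[:2])
    in_range = [c for c in cameras if ground_dist(c) <= c["max_range_m"]]
    return min(in_range, key=ground_dist, default=None)
```

On a new radar track, the video system would call `select_camera`, then drive the chosen PTZ to the returned pan/tilt and hand the centred view to the analytics for confirmation.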

Slew to target (slew to cue) using a PTZ camera, video analytics and a radar

Sensor collaboration affords a high level of automated response, including slew to cue and camera auto follow.

Slew to target further extends to more advanced automation, including camera auto follow, radar follow and camera handoff. Once a camera has been cued to a location by either a video or a radar detection, video analytics can lock onto the target and automatically follow it, continuously adjusting the camera’s pan, tilt and zoom to keep the target centered in the field of view. Should the target leave a camera’s coverage area, the radar can steer the camera using its position data to maintain a continuous view. Working together, the sensors can then determine when the target’s path enters an area covered by another camera, at which time they issue a slew-to-target command and allow the new camera to take over auto-follow duties. Throughout these scenarios, the collaboration also provides a visual record of the sensors’ actions on a map-based user interface, including merged tracks, sensor-in-control annunciations, visual alarms, as well as the ability for the user to take manual control.
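The handoff decision described above can be sketched as a small selection rule: keep the current camera while the target remains in its coverage (with a hysteresis margin so control does not thrash at the boundary), otherwise hand off to the nearest covering camera, or to none, in which case the radar carries the track alone until a camera regains coverage. The record layout and margin here are illustrative assumptions.

```python
import math

def controlling_camera(cameras, target_xy, current=None, hysteresis_m=10.0):
    """Decide which camera should auto-follow a target.

    cameras: dicts with "pos" (x, y) and "range_m" coverage radius.
    target_xy: target position in the same local metres-based frame.
    current: the camera presently following the target, if any.
    Returns the camera that should hold the target, or None when no
    camera covers it (the radar then maintains the track by itself).
    """
    def dist(c):
        return math.dist(c["pos"], target_xy)

    # Hysteresis: the current camera keeps control slightly beyond its
    # nominal range, preventing rapid back-and-forth handoffs.
    if current is not None and dist(current) <= current["range_m"] + hysteresis_m:
        return current
    covering = [c for c in cameras if dist(c) <= c["range_m"]]
    return min(covering, key=dist, default=None)
```

Run once per track update, this yields exactly the behavior in the article: one camera follows, control passes to the next camera as the path crosses into its zone, and the radar bridges any gap between zones.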

Conclusion

Security manufacturers continue to innovate, making their sensors more reliable, more accurate and able to perform under more varied conditions and scenarios. Sometimes, however, these objectives can be achieved not by reinventing a sensor, but by combining the capabilities of existing sensors through powerful software into a solution that is more than the sum of its parts. Case in point is the collaboration of video analytics and radar, a combination that can provide critical facilities with increased situational awareness, more precise detection and target identification, and the capability for automated system response, well beyond what each sensor could provide operating independently.

Additional Viewing

Video – Combining radar, GPS and video targets on a map-based GUI

Video – Camera Follow using a radar signal

Video – Map-Based Target Selection

Video – Intrusion Scenario (Narrated)
