Search Results
- PureTech's Advisory Board
PureTech's Trusted Advisory Board
- General (ret.) Ronald E. Keys: General Ron Keys retired from the Air Force in November 2007 after completing a career of over forty years. His last assignment...
- Gord Helm, MPA: Gord Helm is the former Port of Halifax Manager of Security and Operations. Gord served over 20 years in the Royal Canadian Navy and is active in corporate governance...
- Bill Stuntz: Bill Stuntz is a proven business leader with strengths in setting strategic direction, building organizations, and negotiating mergers and acquisitions...
- Vance Hilderman: Vance Hilderman is a 25-year software/systems engineering entrepreneur with repeated success starting and growing...
- PureTech Leadership Team
PureTech Leadership Team
- Larry Bowe, Jr., President & CEO: Larry Bowe is the Founder and President of PureTech Systems Inc. As an entrepreneur, Larry has been creating shareholder value in the software industry since 1987. Prior to founding PureTech Systems, Inc...
- Chris Sincock, VP Critical Infrastructure: Chris brings over 30 years of broad-ranging experience in the electronic security industry, serving in roles ranging from Product Manager to President. Chris held executive leadership positions with Lenel...
- Wade Barnes, Chief Software Architect: Wade Barnes has been in the information technology industry since 1977, having worked in the areas of distributed computing, large systems design, and computer applications in diverse fields including...
- Ilia Rosenberg, VP Federal Sector: In his current capacity as VP Federal Sector, Ilia Rosenberg manages all US Federal and International Government pursuits in the area of national security. Leveraging his extensive experience, Ilia drives all...
- Charlie Farnsworth, Operations Manager: Charlie Farnsworth, with over 40 years of experience in the software industry, currently serves as the Operations Manager at PureTech Systems. In this position, Charlie is responsible for...
- Ben Renshaw, Director of MENA: With over two decades of experience, Mr. Ben Renshaw joins the PureTech team as the Director of MENA. In 1998, Mr. Renshaw founded Veltek International Inc. with Mr. Chu Chun of Chun Shin Electronics...
- Surpassing Legacy Standards: Why DRI/DORI are not Metrics for Modern AI Security Systems - Whitepaper | PureTech Systems
Surpassing Legacy Standards: Why DRI/DORI are not Metrics for Modern AI Security Systems - Whitepaper
Jan 12, 2026

1. Introduction
Some surveillance vendors claim they can automatically detect and classify humans and vehicles at extreme distances—sometimes 1 to 6 miles—using only a few pixels per target. In some cases, claims as low as 2–100 pixels are made. These claims are often justified by citing DRI (Detection, Recognition, Identification) or DORI tables, which were created decades ago for human observers, not autonomous computer-vision systems. At the same time, some vendors attempt to avoid AI and machine learning altogether, relying instead on simple motion detection, thresholding, or rule-based analytics, while still claiming "automatic detection." Both approaches—misusing DRI/DORI or avoiding ML entirely—lead to systems that fail in real-world deployments.

These claims conflict with the physics of imaging, the limitations of sensors, and the fundamental requirements of modern machine learning (ML)—especially in environments where atmospheric turbulence, reduced contrast, camera shake, background complexity, partial occlusions, animals, and environmental motion are common. This white paper explains why DRI and DORI apply only to human perception, why they cannot be used to predict autonomous classification performance, why ML systems require substantially more pixels on target, and why systems that do not use AI/ML suffer from unacceptably high false-alarm rates. It also explains why long-range conditions require even more margin, and why no company can bypass physics with software. Any extraordinary claim must be validated through a Proof of Concept (POC).

2. What DRI and DORI Actually Measure

2.1 DRI (Detection, Recognition, Identification)
DRI was developed in 1958 to estimate how far a human observer could visually interpret a target using optical or thermal equipment. It describes whether a person can detect that "something is there," recognize a general category (such as human versus vehicle), or identify a specific type. Humans can often recognize that an object is a person with very limited visual information—on the order of 12–16 vertical pixels, which might correspond to roughly:
- 12 pixels high × ~4 pixels wide ≈ ~48 total pixels, or
- 16 pixels high × ~5 pixels wide ≈ ~80 total pixels
This is possible because the human brain can infer missing detail, guess intent, and apply context. DRI was never designed to evaluate autonomous systems.

2.2 DORI (IEC 62676-4:2015)
DORI extends similar ideas to CCTV system design and again describes what a human operator can interpret when viewing video. Recognition-level DORI values often correspond to 7–12 pixels across the target width, still assuming a human is making the judgment. Neither DRI nor DORI evaluates whether a computer can autonomously classify a target, nor do they account for turbulence, camera shake, background complexity, camouflage, or occlusions.
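To ground these pixel counts, simple pinhole-lens geometry relates target size, range, focal length, and detector pitch to pixels on target. The sketch below is a generic back-of-the-envelope calculation, not part of the whitepaper; the 100 mm focal length, 12 µm pixel pitch, and target dimensions are illustrative assumptions.

```python
# Back-of-the-envelope pixels-on-target from pinhole-lens geometry.
# All optics values are illustrative assumptions, not vendor specifications.

def pixels_on_target(target_m: float, range_m: float,
                     focal_mm: float, pitch_um: float) -> float:
    """Pixel extent of a target of size target_m viewed at range_m with a
    lens of focal length focal_mm and detector pixel pitch pitch_um."""
    image_mm = focal_mm * target_m / range_m        # size of image on sensor
    return image_mm * 1000.0 / pitch_um             # mm -> um -> pixels

# Assumed example: 1.8 m x 0.5 m person, 100 mm lens, 12 um pixel pitch.
for miles in (1, 3, 6):
    rng = miles * 1609.34
    h = pixels_on_target(1.8, rng, 100.0, 12.0)
    w = pixels_on_target(0.5, rng, 100.0, 12.0)
    print(f"{miles} mi: ~{h:.1f} px x ~{w:.1f} px (~{h*w:.0f} total px)")
```

Under these assumed optics, a standing person at 1 mile subtends only single-digit pixel heights, and far less at 6 miles—the physical reality the following sections build on.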
3. Why DRI/DORI Cannot Be Applied to Machine Learning
Machine-learning systems such as Convolutional Neural Networks (CNNs) and Transformers classify objects by extracting visual features from the image, including shape, edges, texture gradients, motion consistency, and frame-to-frame stability. If these features do not physically exist in the pixels, the ML system cannot classify the object. For example, a person appearing 10 pixels high × ~3 pixels wide ≈ ~30 total pixels does not contain enough information to reliably determine head shape, limb movement, torso structure, or vehicle geometry. A human observer might guess; an algorithm cannot. DRI/DORI recognition thresholds describe what humans can guess from incomplete data. ML systems require real, measurable information.

4. What Happens If You Do NOT Use AI / Machine Learning
It is equally important to understand the consequences of not using AI/ML at all. Systems that rely solely on traditional video analytics—such as simple motion detection, pixel-change thresholds, background subtraction, or rule-based logic—lack the ability to understand what is moving. They can detect motion, but they cannot reliably classify it. As a result, non-AI systems typically suffer from:
- Extremely high false-alarm rates
- Inability to distinguish humans from animals
- Inability to reject nuisance motion
- Poor scalability to large or complex environments

4.1 Why Non-AI Systems Generate Excessive False Alarms
Without ML classification, a system must alarm on any motion that meets basic criteria. This includes:
- Animals
- Blowing vegetation
- Shadows
- Clouds and moving sun patterns
- Heat shimmer and atmospheric turbulence
- Camera shake
- Insects and birds
- Rain, snow, and dust
Rule-based filters can reduce some noise, but they quickly break down in real environments because natural motion is highly variable. As thresholds are tightened to reduce false alarms, real threats are missed. As thresholds are loosened to avoid misses, false alarms explode. This tradeoff cannot be solved without classification.

4.2 Non-AI Systems Cannot Scale
As coverage areas grow larger or more complex, non-AI systems become unmanageable:
- Operators are overwhelmed by alarms
- Alarm fatigue sets in
- Systems are ignored or turned down
- Real threats are lost in noise
In practice, many non-AI deployments are eventually disabled or relegated to "monitoring only" because they generate too many alarms to be useful.

4.3 Detection Without Classification Is Operationally Dangerous
A system that "detects motion" but cannot determine whether the object is a human, vehicle, animal, or irrelevant noise is not an autonomous security system. It simply shifts the burden to the operator, increasing workload and the chance of human error. This is why modern perimeter security requires both detection and classification, and why AI/ML—used correctly and within physical limits—is essential. (A minimal illustration of motion detection without classification follows.)
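To illustrate Section 4's point, here is a minimal sketch of the classification-free approach it describes: background subtraction plus an area threshold, using generic OpenCV calls. It is not any vendor's product, and the input path and threshold value are assumptions. Every sufficiently large moving blob, whether intruder, animal, or blowing vegetation, raises the same alarm.

```python
# Minimal sketch of motion detection WITHOUT classification (generic OpenCV,
# not any vendor's pipeline). Any large moving region -- person, deer, or
# wind-blown brush -- produces the identical alarm.
import cv2
import numpy as np

cap = cv2.VideoCapture("perimeter_feed.mp4")        # hypothetical input
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = np.ones((3, 3), np.uint8)
MIN_AREA = 150   # assumed blob-area threshold (pixels)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                          # foreground/motion mask
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # de-speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        if area >= MIN_AREA:
            # No classification: the system cannot say WHAT moved.
            print(f"frame {frame_idx}: ALARM, blob area {area:.0f}")
    frame_idx += 1
cap.release()
```

Raising MIN_AREA suppresses nuisance blobs but also misses small or distant targets; lowering it floods the operator with alarms. That is the unsolvable threshold tradeoff described in Section 4.1.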
5. Long-Range Physics Further Increase ML Requirements (1–6 Miles)
At long ranges, multiple physical effects degrade imagery beyond what DRI/DORI assume:
- Reduced contrast
- Background complexity
- Atmospheric turbulence
- Camera shake
- Loss of gradients
PureTech mitigates camera-induced motion by performing its proprietary image stabilization as the first processing step, ensuring downstream analytics operate on a stable image. Even so, long-range ML classification requires more pixels, not fewer.
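The whitepaper does not disclose how PureTech's proprietary stabilization works, but the general concept of registering each frame to a reference before any analytics run can be sketched with a standard technique such as phase correlation. The following is an assumed, generic illustration only; the input file name is hypothetical.

```python
# Minimal sketch of pre-analytics frame stabilization via phase correlation
# (a standard generic technique -- NOT PureTech's proprietary algorithm).
import cv2
import numpy as np

def stabilize(gray: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Estimate the global shift of `gray` relative to `ref` and undo it."""
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(ref), np.float32(gray))
    h, w = gray.shape
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])      # inverse translation
    return cv2.warpAffine(gray, M, (w, h))

cap = cv2.VideoCapture("tower_camera.mp4")          # hypothetical input
ok, first = cap.read()
ref = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)       # reference frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    stable = stabilize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), ref)
    # ...detection and classification would run on `stable`...
cap.release()
```

This toy version corrects only global translation; production stabilization must also handle rotation, zoom, and scene parallax.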
6. Occlusions: Why Real-World Systems Must Design for More Pixel Margin
Real environments include frequent occlusions caused by vegetation, terrain, infrastructure, and partial self-occlusion. When only part of a target is visible, the effective usable pixel count drops sharply. For example:
- 40 px high × 15 px wide ≈ 600 total pixels may be sufficient for a fully visible person.
- Seeing only half the body may require significantly more total pixels to maintain classification confidence.
Designing only to ideal conditions guarantees failure.

7. Independent Evidence: Pixel Requirements for Reliable Autonomous Classification
Independent research and industry experience consistently show that reliable autonomous classification cannot be achieved with only a handful of pixels, regardless of algorithm choice or marketing claims. In practical deployments, autonomous classification systems must achieve high probability of correct classification, low false-alarm rates, and low misclassification rates simultaneously. Achieving all three requires substantial spatial and temporal information about the target. Across a wide range of studies and real-world deployments, several consistent observations emerge:
- Very small targets (on the order of only tens of total pixels) do not contain sufficient structure for reliable autonomous classification.
- As pixel counts increase into the hundreds of total pixels, classification accuracy improves substantially, particularly when combined with temporal information such as motion consistency.
- At long ranges, additional factors—including atmospheric turbulence, reduced contrast, background complexity, and partial occlusions—further reduce usable information, increasing the amount of image data (pixels) required to maintain high accuracy.
Importantly, there is no single universal pixel threshold that guarantees reliable classification at long range. The effective pixel requirement depends on multiple factors, including sensor modality, environmental conditions, target contrast, degree of occlusion, and system architecture. Systems that rely primarily on static image appearance and single-frame analysis tend to require significantly larger target images (in the thousands of pixels) to achieve acceptable performance under degraded conditions. More advanced systems that exploit stabilized imagery, coherent motion over time, and real-world constraints can extract more information from the same imagery—but no credible system can achieve reliable autonomous classification at DRI/DORI recognition levels or at 2 to tens of total pixels. For long-range applications, it is realistic to expect that classification accuracy improves as available target information (pixel count) grows from a few tens of pixels into the hundreds or more, depending on conditions. Claims of reliable classification far below this regime are not supported by physics, industry experience, or independent research.

8. Training Data: Why Good ML Requires Large, Clean, Real-World Datasets
ML performance depends heavily on training-data quality and quantity. Modern vision models typically require hundreds of thousands to millions of representative examples. PureTech has been training visible and thermal ML models for 8 years, using hundreds of thousands of real-world images collected under operational conditions.
Garbage In, Garbage Out: Poor training data leads directly to poor performance. Garbage data includes:
- Targets that are too small
- Unrealistic close-ups never seen in deployment
- Low-contrast imagery
- Partial fragments without sufficient structure
- Severe blur or turbulence distortion
- Incorrect or inconsistent labeling
PureTech applies proprietary preprocessing and quality controls to prevent such data from contaminating training.

9. Thermal vs. Visible Imaging
Thermal imaging often outperforms visible cameras at long range and at night because it measures emitted heat rather than reflected light. Advantages include better target-background separation, no need for lighting, reduced impact from shadows, and reduced effectiveness of visual camouflage. Thermal does not eliminate physics limits, but it improves signal quality under difficult conditions.

10. MWIR, LWIR, and SWIR Overview
- LWIR (8–14 µm): uncooled, durable, good short- to medium-range performance
- MWIR (3–5 µm): superior long-range performance, higher contrast, requires cooling
- SWIR (~1–2 µm): reflected-light imaging, good detail in low light, poor in fog or total darkness
Each has tradeoffs; none can violate physics.

11. PureTech's Physics-Aligned Multi-Cue Approach
PureTech Systems combines:
- Image stabilization (first step)
- Terrain-mapped object tracking for real-world size, speed, and direction
- Motion-consistency filtering
- Shape plausibility checks
- Speed profiling
- Contextual and trajectory filtering
- ML classification
PureTech holds 16 issued patents covering image processing, stabilization, and computer vision.

12. Why This Matters: Missed Detections, False Alarms, and ROI
A missed detection can mean loss of life, loss of critical infrastructure, regulatory penalties, lawsuits, and reputational damage. False alarms waste time, consume resources, cause alarm fatigue, and obscure real threats. Excessive false alarms are functionally equivalent to missed detections because operators stop responding appropriately. Organizations that choose systems based solely on lowest acquisition cost often incur far higher total cost of ownership and risk exposure. Investing upfront in systems designed around physics, robust ML, stabilization, terrain mapping, and multi-cue validation delivers far better ROI by avoiding catastrophic failures and operational collapse.

13. Proof of Concept: The Only Valid Verification
Any vendor claiming autonomous classification at DRI/DORI pixel levels, or at less than several hundred total pixels, especially under extreme long-range and occluded conditions, must demonstrate the claim in a Proof of Concept. Physics always wins.

14. Conclusion
DRI and DORI describe what humans can infer. They do not describe what autonomous systems require. Systems that ignore ML generate unacceptable false alarms. Systems that misuse ML or ignore physics miss real threats. PureTech Systems delivers reliable autonomous detection and classification by respecting physical reality, using stabilized imagery, terrain-mapped measurements, multi-cue analytics, disciplined ML training, and patented computer-vision technology—producing operational security systems that work, as demonstrated by real-world deployments in the most challenging environments, such as national borders.
- News
PureTech in the News
- Jan 12, 2026: Surpassing Legacy Standards: Why DRI/DORI are not Metrics for Modern AI Security Systems - Whitepaper
- Aug 26, 2025: PureTech Systems: Smarter Perimeter Security for a Safer World. Visit us at GSX in booth #901 to see how PureTech is redefining perimeter security with smarter, more reliable detection.
- Aug 18, 2025: PureTech Systems and Clear Align Partner to Deliver Advanced Autonomous Security and Command-and-Control for U.S. Air Force Tactical Security System. PureTech Systems and Clear Align partner to deliver a rapidly deployable perimeter security solution for the U.S. Air Force's Tactical Security System (TSS).
- Aug 11, 2025: Advancing Border Security: PureTech Powers the Next Phase of Mobile Surveillance. Benchmark delivered 24 new Mobile Vehicle Surveillance Systems with radar to enhance border security.
- Aug 6, 2025: PureTech Systems to Showcase Innovative Solutions at Leidos Supplier Innovation & Technology Symposium.
- Jul 21, 2025: Celebrating Innovation: PureTech CEO Larry Bowe Named 2025 Security Innovator. PureTech Systems is proud to announce that our President and CEO, Larry Bowe, has been honored with the 2025 Security Innovator Award.
- Jun 17, 2025: PureTech Systems to Showcase Next-Generation Autonomous Perimeter Protection at SIA's Perimeter PREVENT 2025.
- May 28, 2025: PurifAI for TMA's Dispatch Spring Edition. PureTech Systems featured in TMA Dispatch Spring 2025.
- May 13, 2025: PurifAI | Immix Integration. IMMIX announces integration with PureTech Systems PurifAI.
- May 1, 2025: Expanding Horizons, Protecting Perimeters. Knowledge partnership with ISJ.
- Apr 30, 2025: 2 Wins for 2025. GOVIES Awards for Excellence in Autonomous Security Solutions: PureTech Systems wins two 2025 GOVIES Awards.
- Mar 28, 2025: PureTech Systems Integrates PurifAI with IMMIX's Monitoring Platforms to Transform Video Alarm Validation. Revolutionary AI-powered solution drastically reduces nuisance alarms, enhancing monitoring efficiency and effectiveness.
- Feb 5, 2025: A New Addition to the PureTech Toolbox. Integration with Echodyne's EchoShield radar for enhanced security and counter-drone solutions.
- Dec 23, 2024: PureTech Systems Inc. Completes Successful Acceptance Testing of Rail Intrusion Detection System (RIDS) at Major Rapid Transit Agency.
- Dec 18, 2024: PureTech Systems Unveils Rapid Deploy Autonomous Perimeter Surveillance System (RDAPSS), Now Available with Drone Detection and Tracking Capabilities.
- Dec 12, 2024: PureTech Systems Inc. Introduces PurifAI, the Ultimate SaaS Solution for False/Nuisance Alarm Filtering for Mass Video Monitoring Centers.
- Dec 10, 2024: Solving the Nuisance Alarm Problem with PurifAI. A Q&A with Chris Sincock, VP of Channel Development.
- Nov 14, 2024: PureTech Systems Announces the Release of PureActiv® Version 16, Featuring Enhanced Geospatial AI-Boosted Video Analytics for Critical Infrastructure Protection.
- Oct 8, 2024: PureTech Systems Inc. Releases New Comprehensive Interface Control Document. The new ICD will deliver autonomous perimeter protection capabilities.
- Oct 2, 2024: PureTech Systems Inc. Awarded Major Command and Control Contract by U.S. Government to Enhance National Security. Advanced integration and AI to extend the life of legacy sensors for Government agencies.
- Summer 2022 Newsletter | PureTech Systems
Summer 2022 Newsletter
Jul 7, 2022

PureActiv® Counter-UAS for Improved Situational Awareness
PureTech's PureActiv software provides the capability to autonomously detect, classify, locate, track, alert, display video, and optionally counter both short- and long-range aerial threats. Through seamless integrations with mission-specific, best-in-class sensors, cameras, and counter-devices, PureTech drastically enhances situational awareness, giving you the precise information you need to make an informed decision on how to counter these new types of air threats. PureActiv® Counter-UAS can be deployed on the R-DAPSS, or as part of a traditional hard-wired system.

PureActiv® Rapid-Deploy Autonomous Perimeter Surveillance System (R-DAPSS) at a Reduced Infrastructure Cost
R-DAPSS enables airports, borders, military bases, seaports, transit agencies, and utilities to quickly deploy a temporary or permanent high-fidelity virtual perimeter system at substantially less cost and time than a hard-wired solution. The PureActiv R-DAPSS provides the same best-in-class level of perimeter intrusion detection, auto-verification, and automated deterrence as a hard-wired solution for both ground and air targets at less than half the infrastructure cost.

PureTech Gives Back
PureTech was a proud sponsor of the Exemplary Service Award at the recent Border Patrol Foundation (BPF) Night at the Alamo. This award was given to an Agent who, while on duty, risked his life to save the lives of two children who were at risk of drowning in a river.

Autonomous Intrusion Detection at the Edge
PureTech can deploy their industry-leading video analytics on edge devices, giving customers edge-device output to a VMS. Options include: false-alarm reduction for radar, fence, and motion video analytics; dynamic map as a video output; alarms via dry contact; alarms via software integration; live video from cameras with annotations; and recorded alarm video from cameras with annotations.

In Case You Missed It
PureTech Systems President & CEO Larry Bowe was selected by GIT Security EMEA as an industry expert to provide his views on security for critical infrastructure. He was one of four industry experts chosen to give a focus interview. During the interview, three questions were asked regarding approach and solutions, challenges, and success cases.
- PureTech Systems: Smarter Perimeter Security for a Safer World | PureTech Systems
PureTech Systems: Smarter Perimeter Security for a Safer World
Aug 26, 2025
PureTech Systems is transforming the way organizations protect critical assets with advanced geospatial video analytics and intelligent command-and-control solutions. Our flagship software, PureActiv®, is trusted worldwide to deliver real-time, automated protection across the most demanding environments—from borders and coastlines to energy facilities, transportation hubs, and military installations. What sets PureTech apart is our ability to achieve near-zero nuisance alarms. By combining patented geospatial AI video analytics and other artificial intelligence algorithms, PureActiv filters out false alarms and ensures operators respond only to real threats. This unmatched accuracy not only reduces operational costs but also strengthens security effectiveness. Designed for flexibility, PureActiv seamlessly integrates with existing security investments—cameras, radars, fence sensors, access control systems, and more—eliminating the need for rip-and-replace upgrades. The solution scales as your security needs evolve, protecting every aspect of a perimeter, including fence lines, drainage pipes, waterside approaches, turnstiles, and vehicle or pedestrian gates, with tailored designs that ensure 100% detection coverage. PureTech also enhances situational awareness by delivering precise geolocation, classification, and tracking of intrusions in real time. Whether it's a person, vehicle, drone, or watercraft, operators gain the actionable intelligence they need to respond decisively. With over two decades of proven deployments, PureTech continues to drive innovation in perimeter protection—helping customers worldwide safeguard lives, secure facilities, and minimize risk. Visit us at GSX in booth #901 to see how PureTech is redefining perimeter security with smarter, more reliable detection. Or schedule a demo now.
- ITS | PureTech Systems
Intelligent Traffic Systems (ITS)
The goal of Intelligent Traffic Systems (ITS) is to ensure the safe and efficient flow of traffic in and around urban areas and cities. By 2050, an estimated 60% of the world's population will live in cities (source: TechRepublic.com). ITS uses intelligent video and other sensors to detect the movement of vehicles and issues related to traffic, such as signal controls, driver-safety law violations, vehicle accidents, traffic jams, and wrong-way driving, among others. The data from these sensors can be used to improve the lives of citizens and visitors and ensure sustainable cities. Optimizing the flow of vehicles, including quickly finding parking, ensures a great experience and minimizes air pollution. PureTech's AI video analytics can be used to address many of the detection needs of Intelligent Traffic Systems.
Key Automated Detection Capabilities:
- Cloud- and edge-based deployments
- Presence of vehicles to control intersection lights
- Crosswalk incursion to alert pedestrians and drivers
- Driver infractions, such as illegal turns and exit/median lane violations
- Vehicles blocking intersections
- Wrong direction of travel
- Vehicle counting for traffic statistics
- Street and lot parking-stall availability
- Parked vehicle duration
- PureTech Integrates with Intel's OpenVINO! | PureTech Systems
PureTech Integrates with Intel's OpenVINO!
Mar 24, 2023
PHOENIX, Ariz. – PureTech Systems has announced its integration with Intel's OpenVINO to deliver its AI-boosted video analytics on edge devices with lower cost, lower power consumption, and a smaller size. OpenVINO is a powerful toolkit developed by Intel that allows developers to deploy their computer-vision applications on a variety of hardware platforms, including edge devices. By integrating with Intel, PureTech Systems' AI-boosted video analytics can now run on edge devices without the need for a discrete GPU card, which is expensive and power-hungry. "Through our partnership with Intel, we are able to make our technology more accessible and affordable for a wider range of customers," said Larry Bowe, President of PureTech Systems. "This is a significant step forward in our mission to deliver the best possible solutions for our customers." This new integration emphasizes the company's commitment to delivering cutting-edge technology. By leveraging its AI-boosted video analytics capabilities, PureTech Systems is well-positioned to drive technological innovation for critical infrastructure protection and to create long-term value for its customers. To learn more about this technology, PureTech Systems will be at the ISC West conference next week in Las Vegas, NV. Come by booth 3055!
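For context on what such an integration typically looks like, the standard OpenVINO Runtime flow for running a model on an edge CPU is sketched below. This is generic OpenVINO usage, not PureTech's code; the model file name and input shape are hypothetical.

```python
# Minimal sketch of CPU inference with OpenVINO Runtime (generic usage,
# not PureTech's pipeline). "detector.xml" and the input shape are hypothetical.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("detector.xml")         # OpenVINO IR (.xml + .bin)
compiled = core.compile_model(model, "CPU")     # runs without a discrete GPU

frame = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in frame
result = compiled([frame])[compiled.output(0)]              # single inference
print("output shape:", result.shape)
```

Swapping the "CPU" device string for other targets lets the same model run on different Intel hardware, which is the portability benefit the announcement highlights.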
- PureTech Systems Inc. Awarded Major Contract | PureTech Systems
PureTech Systems Inc. Awarded Major Contract
Sep 26, 2024
Phoenix, AZ – PureTech Systems Inc., the leader in geospatial AI-boosted video analytics for wide-area perimeter and border security, is pleased to announce that it has been awarded a substantial contract to provide its technology to protect the infrastructure of an international maritime waterway. This award highlights PureTech's continued leadership in safeguarding our Nation's critical transportation infrastructure through time-tested technology and autonomous security solutions. The award comes after an extensive, successful pilot performed by the infrastructure management company. As part of the award, PureTech Systems will deploy its patented location-based AI video analytics software, PureActiv®, across multiple locations to detect unauthorized access as ships travel through the waterway system. By leveraging PureTech's AI-boosted perimeter intrusion detection and autonomous surveillance capabilities, the management company will benefit from real-time situational awareness, enhanced threat detection, and a dramatic decrease in nuisance alarms. These capabilities, which allow the customer to utilize their existing cameras, will significantly bolster the security of critical assets, ensuring they remain secure and operational in today's evolving threat landscape. "This contract is a testament to our ability to protect critical infrastructure with precision, reliability and near-zero nuisance alarms," said Larry Bowe, CEO of PureTech Systems Inc. "We are proud to provide autonomous, AI-driven security solutions that deliver actionable intelligence and superior threat detection capabilities in this new, challenging environment while seamlessly integrating into existing security system infrastructure." PureTech Systems' AI-powered PureActiv® software is known for its seamless integration with a wide range of cameras, detection technologies, VMSs, and PSIMs, ensuring comprehensive security coverage for facilities such as power plants, utilities, seaports, railways, and more. With a continued focus on innovation and security, this contract further solidifies PureTech's position as the industry leader in autonomous perimeter and infrastructure protection software. For more information on PureTech Systems' patented security solutions, please visit www.puretechsystems.com.
- Celebrating Innovation: PureTech CEO Larry Bowe Named 2025 Security Innovator | PureTech Systems
Celebrating Innovation: PureTech CEO Larry Bowe Named 2025 Security Innovator
Jul 21, 2025
- Wade Barnes
Wade Barnes
Chief Software Architect
Wade Barnes has been in the information technology industry since 1977, having worked in the areas of distributed computing, large systems design, and computer applications in diverse fields including the security, health care, financial, and minerals industries. Prior to joining PureTech in 2006, companies he worked for include American Express, Lockheed Martin, and 3M Health Information Systems. He is a winner of the 3M Technical Circle of Excellence Award and the Oblad Award from the University of Utah. Mr. Barnes has B.S. and M.S. degrees in mining engineering and an M.S. degree in computer sciences.
- PureTech Completes Deployment of PureActiv for Protection of Power Generation Sites | PureTech Systems
PureTech Completes Deployment of PureActiv for Protection of Power Generation Sites
Feb 18, 2020
PHOENIX, Ariz. – PureTech Systems announces it has been awarded another multiple-site contract for the deployment of its PureActiv Geospatial AI Video Analytics, Multi-Sensor Integration, and Command and Control software. The system will provide wide-area perimeter protection at multiple power generation plants in the United States. This award follows the earlier successful 2020 deployment of PureActiv at multiple power generation sites in the U.S. The system integrates PureTech's market-leading geospatial AI deep-learning video analytics and other sensor technologies into a seamless Common Operating Picture. The automated system is protecting miles of perimeter from unauthorized intrusion through fence lines and turnstiles. The additional sites are scheduled to be completed by the end of the year. For security reasons, the client cannot be disclosed. "These deployments demonstrate that our patented perimeter intrusion detection software solution can be successfully deployed on a large scale in a very short time frame," stated Larry Bowe, President of PureTech Systems. "It speaks to the 15 years of investment we have made, not only in market-leading intrusion detection and classification algorithms, but also in ease of deployment and use."

