AI now sits on the frontline of public safety: scanning camera feeds for weapons, triaging alerts in school corridors, and surfacing threats in near real-time. Proponents say it can shorten response times, deter violence, and free officers from routine monitoring. Critics counter that errors are inevitable — and when policing is involved, those errors have human consequences.
Two recent BBC reports highlight both sides. In Baltimore County (US), armed officers handcuffed a 16-year-old after an AI system flagged what turned out to be a packet of crisps as a gun. In Dorset (UK), police resources were tied up by an “AI homeless man” prank, in which AI-generated images of a stranger in the home were sent to family members, alarming them into calling 999. Together, the episodes reveal how AI can amplify safety or fear, depending less on the model than on the procedures wrapped around it.
Case 1: A Crisp Packet, a Gun Alert, and Eight Patrol Cars
The Baltimore County incident unfolded after an AI gun-detection system alerted on what looked like a firearm. The vendor says the image was then reviewed by humans and cleared, but the subsequent school-side communication failed: the principal, not having seen the “no threat” update, contacted the School Resource Officer (SRO); police arrived in force and handcuffed the student. Officers later confirmed there was no weapon, and no arrest was made.
Key takeaways from the episode:
- AI did not act alone. A human review reportedly overruled the machine, but the downstream chain failed. That makes this as much an information governance issue as a model accuracy problem.
- Error costs escalate quickly in policing. Even short-lived misclassifications can lead to armed confrontations, with lasting effects on a teenager’s sense of safety — and on community trust.
- Vendors vs. outcomes. The provider argued its process “operated as designed” (rapid detection and human verification). From the public’s perspective, the outcome — guns drawn on a child with snacks — is what matters.
In short: the system’s technical loop closed; the operational loop did not. The distance between “AI alert” and “police action” is a policy space, and it failed.
Case 2: AI-Generated ‘Intruders’ and Real 999 Responses
In Dorset, police warned about a social-media trend where AI-generated images (a stranger eating, sleeping, refusing to leave) are sent to family to simulate a trespasser. One “extremely concerned parent” called 999; it was a prank. The force said valuable deployable resources were diverted from genuine emergencies and urged the public to verify before dialling.
Here the model didn’t misclassify anything — people did. But the effect is similar: false signals push scarce resources into the wrong place at the wrong time. The incident underlines a broader shift: AI can now fabricate plausible evidence at household scale, making hoaxes cheaper, faster, and more believable than before.
What AI Is Good At in Policing
- Always-on monitoring where attention is thin. Computer vision can watch feeds humans cannot, surfacing possible threats faster than manual review.
- Deterrence by visibility. Public knowledge that gun detection or anomaly spotting is in place can discourage some behaviours.
- Force multiplier for limited staff. Automated triage can free officers for in-person tasks that require judgment, empathy, and discretion.
These are non-trivial gains. In settings with real, repeated risk, shaving minutes off detection or focusing attention on the right doorway can save lives.
Where It Breaks — and Why
- False positives in high-stakes contexts. Even very low per-decision error rates produce regular crises at scale (a rough illustration follows this list). A single misfire can escalate into armed responses or traumatic stops.
- Human-in-the-loop ≠ human-in-control. Verification only helps if updates travel reliably and quickly to the exact decision-maker — the principal, the SRO, the dispatch desk. If that chain is leaky, human review becomes a paper safeguard.
- Procedural brittleness. Who cancels an alert? How is “all clear” propagated? When do patrols stand down? Without drilled SOPs (standard operating procedures), well-meant caution becomes over-deployment.
- Vulnerability to fabricated signals. AI-generated images and messages create swatting-adjacent dynamics: realistic enough to trigger urgent responses, common enough to strain call-handling and trust.
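To make the first point concrete, here is a back-of-the-envelope calculation. Every number in it is an illustrative assumption (camera count, monitored hours, decision rate, error rate), not a figure from either incident or from any vendor.

```python
# Rough illustration: how a "very low" false-positive rate still produces
# regular false alerts at scale. All numbers below are illustrative assumptions.

cameras = 200                 # cameras across a school district (assumed)
hours_per_day = 10            # monitored hours per camera per day (assumed)
decisions_per_hour = 60       # detection decisions per camera-hour (assumed)
false_positive_rate = 1e-5    # one error per 100,000 decisions (assumed, optimistic)

decisions_per_day = cameras * hours_per_day * decisions_per_hour
false_alerts_per_day = decisions_per_day * false_positive_rate

print(f"Decisions per day: {decisions_per_day:,}")
print(f"Expected false alerts per day: {false_alerts_per_day:.1f}")
print(f"Expected false alerts per 180-day school year: {false_alerts_per_day * 180:.0f}")
```

Even with a deliberately optimistic one-in-100,000 error rate, these assumptions yield roughly one false alert per school day and more than two hundred per school year; whether each one ends quietly or with officers deployed depends entirely on the downstream procedures.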
To reap benefits while reducing harm, agencies and schools need to treat AI systems as socio-technical — equal parts model, policy, training, and oversight.
1) Accuracy, Transparency, Accountability
- Publish measured error rates (overall, by context, by lighting/angle), not marketing claims. Make false-positive/false-negative trade-offs explicit; see the sketch after this list for one way to derive such figures from reviewed alert logs.
- Third-party evaluation and periodic re-testing; independent audits after significant incidents.
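One way to make “measured error rates, by context” more than a slogan is to compute them from human-reviewed alert logs rather than accept vendor claims. Below is a minimal sketch; the log format, context labels, and verdict values are hypothetical, and the metric shown is the share of alerts that human review cleared as no threat.

```python
from collections import defaultdict

# Minimal sketch: per-context share of alerts cleared by human review.
# The log format (context label + human verdict) is a hypothetical example.
alerts = [
    {"context": "corridor/daylight", "human_verdict": "no_threat"},
    {"context": "corridor/daylight", "human_verdict": "confirmed"},
    {"context": "entrance/low_light", "human_verdict": "no_threat"},
    {"context": "entrance/low_light", "human_verdict": "no_threat"},
]

counts = defaultdict(lambda: {"alerts": 0, "cleared": 0})
for alert in alerts:
    bucket = counts[alert["context"]]
    bucket["alerts"] += 1
    if alert["human_verdict"] == "no_threat":
        bucket["cleared"] += 1

for context, c in sorted(counts.items()):
    rate = c["cleared"] / c["alerts"]
    print(f"{context}: {c['alerts']} alerts, {rate:.0%} cleared as no threat")
```

Published this way, per-context figures (corridor vs. entrance, daylight vs. low light) turn the false-positive/false-negative trade-off into something a school board or police authority can actually interrogate.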
2) Communications That Actually Close the Loop
- Binding escalation rules: No armed deployment unless a verified threat reaches a defined threshold and a named human authorises it.
- Automatic “clear” propagation: If a human reviewer clears an alert, that status must immediately reach the SRO/dispatch system and cancel any pending field action.
- Single source of truth: Lock alert state behind a shared dashboard; no parallel text chains that drift out of sync. A minimal sketch of this alert state machine follows.
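Taken together, these three rules describe a small state machine around each alert, with one authoritative status that every downstream party reads. The sketch below is illustrative only; the state names, roles, and notify_all hook are assumptions, not any vendor's or force's actual system.

```python
from enum import Enum, auto

class AlertState(Enum):
    RAISED = auto()        # AI has flagged a possible threat
    UNDER_REVIEW = auto()  # a human reviewer is verifying
    CLEARED = auto()       # reviewed and found to be no threat
    CONFIRMED = auto()     # verified threat; deployment may be authorised

# The single source of truth only moves along these edges.
TRANSITIONS = {
    AlertState.RAISED: {AlertState.UNDER_REVIEW},
    AlertState.UNDER_REVIEW: {AlertState.CLEARED, AlertState.CONFIRMED},
    AlertState.CLEARED: set(),
    AlertState.CONFIRMED: set(),
}

def notify_all(alert: "Alert") -> None:
    # Placeholder: push the shared status to the SRO, dispatch, and school admin,
    # so a CLEARED verdict automatically cancels any pending field action.
    print(f"[{alert.alert_id}] {alert.state.name} (by {alert.last_actor})")

class Alert:
    def __init__(self, alert_id: str):
        self.alert_id = alert_id
        self.state = AlertState.RAISED
        self.last_actor = "ai_detector"

    def transition(self, new_state: AlertState, by: str) -> None:
        # Reject transitions the policy does not allow (e.g. RAISED -> CONFIRMED
        # without human review), then broadcast the new status everywhere at once.
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.state.name} -> {new_state.name} not allowed")
        self.state = new_state
        self.last_actor = by
        notify_all(self)

    def deployment_authorised(self) -> bool:
        # Armed deployment only from CONFIRMED, never from RAISED or UNDER_REVIEW.
        return self.state is AlertState.CONFIRMED

# Example: the crisp-packet path, where human review clears the alert and the
# "all clear" reaches dispatch before anyone is sent in force.
alert = Alert("alert-001")
alert.transition(AlertState.UNDER_REVIEW, by="human_reviewer")
alert.transition(AlertState.CLEARED, by="human_reviewer")
assert not alert.deployment_authorised()
```

The code is not the point; the invariant is: dispatch reads a single status, a CLEARED verdict propagates automatically and cancels pending field action, and nothing reaches deployment without a named human moving the alert to CONFIRMED.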
3) Human Factors and Training
- Scenario drills that include false positives and false negatives. Practice standing down as carefully as standing up.
- Bias and dignity safeguards: Policies to reduce disproportionate impacts on young people and minorities; body-worn video review when AI-triggered stops occur.
4) Responsible Use of Generative AI Evidence
- Call-taker scripts to validate AI-generated images (request live video, a shared safe word, or corroborating details) before dispatching high-risk responses.
- Public education campaigns: how to spot likely AI fakes; when to call 999; legal consequences for malicious hoaxes.
5) Procurement With Guardrails
- Contracts that require logs, audit access, and post-incident cooperation.
- Exit ramps: the ability to suspend or tune systems rapidly if harm thresholds are crossed.
What the Two Incidents Teach
- Speed without alignment can cause harm. In Baltimore County, the system was fast; the handoff failed. Designing the tech is half the job; designing the workflow is the other half.
- Believability is a new attack surface. In Dorset, the images weren’t real, but they were credible enough to mobilise police. As generative tools improve, credibility becomes a resource attackers can cheaply mint — and responders must learn to triage.
- Public trust is the true rate-limiter. Every mistaken gun call, every prank-driven dispatch, marginally lowers confidence in both AI and policing. That trust is expensive to rebuild.
AI can make policing safer and more efficient — when it is accurate, well-governed, and embedded in robust procedures. It can also magnify fear and error when alerts outrun verification or when fabricated evidence hacks human instincts to protect loved ones.
The technology will not decide which path we take. Policies, training, procurement terms, and communications discipline will. If agencies can align those pieces, AI may become the quiet assistant that notices what humans miss and steps back once humans are sure. If not, we should expect more sirens for crisp packets — and more 999 calls for intruders who were never there.