This is the fourth in a trial blog series called “Practically Prompted” – an experiment in using large language models to independently select a recent, ethically rich news story and then write a Practical Ethics blog-style post about it. The text below is the model’s work, followed by some light human commentary. See this post for the motivations behind this experiment.
The original output and prompt used can be accessed here.
Below is the prompted post.
Carnival, Cameras, and Consent: The Ethics of Live Facial Recognition at Notting Hill
In the past few days, the debate over police use of live facial recognition (LFR) has flared again in the UK. Amnesty International UK warned that LFR “discriminates against people of colour,” urging the Metropolitan Police to halt its deployment—especially around the Notting Hill Carnival later this month. The letter lands as the Met plans to expand LFR use and to deploy the technology around the Carnival’s approaches, making this an unusually clear test case for the ethics of public-space surveillance at mass events.
What should we ask, ethically, before switching on the cameras? A familiar rights framework suggests at least three questions: necessity, proportionality, and equality. Necessity asks whether LFR addresses a specific, serious risk that cannot be managed otherwise. The Met reports that LFR has supported more than 1,000 arrests, with 773 charges or cautions—a public-safety case in miniature. But arrest and charge figures, absent base rates and counterfactuals, don’t show whether LFR (rather than traditional policing) is the decisive factor, or whether the same goals could be met via targeted warrants, intelligence-led patrols, knife arches, or temporary exclusion orders. Necessity is not “does it ever help?” but “is it needed here, now, for this threat?”
Proportionality sharpens that question. The UK Information Commissioner has repeatedly emphasised that facial recognition’s lawfulness hinges on use being “necessary and proportionate,” given the sensitivity of biometric data and the scale of collateral scanning (people who are not suspected of any wrongdoing). Carnival is a joyful, densely crowded celebration; scanning thousands of faces to find a handful of watch-listed individuals is exactly the sort of “dragnet” scenario that pushes proportionality to its limits. If less intrusive measures can reasonably mitigate the risk, proportionality is not met.
Equality surfaces a different worry. Even when algorithms clear lab tests for demographic performance, discrimination can creep in through deployment choices: which watchlists are built, where cameras point, and who is stopped for a “secondary” check. In a city where policing has faced credible findings of institutional racism, the risk of uneven burdens is not hypothetical. Amnesty’s intervention highlights how errors (and the fear of them) can chill participation and assembly for minority communities—an ethical cost that must be counted, not hand-waved.
Consent and contestation matter, too. The Met has said it will position LFR on routes to and from the Carnival—outside the event boundary. That design choice reduces disruption on the parade route but complicates meaningful notice: unlike entering a stadium with bag checks, passers-by on public streets have little practical opportunity to avoid a scan, challenge inclusion on a watchlist, or secure rapid remedy after a false match. If people cannot realistically avoid or contest surveillance, stronger ex-ante safeguards and ex-post redress are morally required.
What would responsible use look like—if LFR is used at all? First, a publicly available threat assessment explaining why LFR is necessary at this event, this year. Second, independent authorisation (not solely by the force deploying it), with a clear, published watchlist policy and strict deletion rules for non-matches. Third, event-specific equality impact assessments and post-deployment reports with hard numbers on scans, alerts, true/false matches, and downstream outcomes (arrests, charges), so the public can judge effectiveness. Fourth, real-time oversight—an on-site civil-liberties liaison with the power to pause or stop scanning if error rates spike. Finally, clear remedies for those falsely matched, including notice, swift deletion, and accessible routes to challenge watchlist inclusion. These are not luxuries; they are the ethical price of operating at the edge of privacy, assembly, and equality rights.
There is a deeper lesson. Policing at scale tempts us toward “frictionless” surveillance—systems that make scrutiny easy precisely because they make it invisible. Carnival’s social value is not just in the absence of violence but in the presence of trust. If we cannot justify LFR to those most likely to be burdened by it—and cannot show that no gentler tool would do—then the ethical permission to scan has not been earned. In liberal democracies, safety should be built with the public, not simply done to them.
Human commentary
OpenAI’s o3 model — which produced the previous posts in this series — has been withdrawn from public use, and the above was generated by the newly released ChatGPT 5 (specifically, its ‘Thinking’ mode). The overall style is consistent: the reporting is accurate, and the ethical analysis is thoughtfully integrated. However, like its predecessor, the model tends to rely heavily on a couple of core sources, sometimes stretching their relevance.
That said, the ethical analysis here feels more incisive, though it is hard to know whether that is down to the new model, the topic itself, or chance. For instance, when discussing the 1,000 arrests linked to LFR, the model rightly asks whether those arrests could have been made through traditional methods like intelligence-led patrols. This cuts through the simplistic ‘security vs. privacy’ debate and demands a much higher standard of proof for the technology’s necessity.
The post also reframes the discrimination debate in a useful way. It points out that the risk isn’t just about flawed algorithms tested in a lab, but about biased human choices in the real world. Decisions about who to put on a watchlist and where to point the cameras can produce discriminatory results even with a technically perfect algorithm.
These are by no means novel insights, but weaving them so effectively into a short, cohesive article is impressive.
Ultimately, I think the main issue with all these LLM posts is their lack of a distinct voice and persona. Authors build a relationship with their audience through a unique voice — be it witty, angry, skeptical, or deeply personal. The reader feels they are listening to a specific person with a stake in the issue. The LLM, by contrast, has the dispassionate voice of a committee.
This could easily be amended through the prompt, but no amount of prompting will convince the reader that the LLM has, for this topic at least, actually attended Carnival. Still, for a neutral-seeming piece of news-gathering and analysis, the minute it took to produce is hard to beat.