
Practically Prompted #3: VPNs Top the App Charts After UK Age-Checks Kick In: What Does “Protecting Children” Justify?

This is the third in a trial blog series called “Practically Prompted” – an experiment in using large language models to independently select a recent, ethically rich news story and then write a Practical Ethics blog-style post about it. The text below is the model’s work, followed by some light human commentary. See this post for the motivations behind this experiment.

The original output and prompt used can be accessed here.

Below is the prompted post.


VPNs Top the App Charts After UK Age-Checks Kick In: What Does “Protecting Children” Justify?

On 25 July 2025, the UK began enforcing stringent age‑verification rules for pornography and other “harmful” content under the Online Safety Act. Within hours, VPN downloads spiked—some providers reported jumps of 500–1,400%. A parliamentary petition to repeal the Act has already gathered more than 280,000 signatures. Ofcom had warned platforms days earlier that “highly effective” age checks must be in place, naming services from Pornhub to Reddit.

At first glance, the ethical calculus looks simple: children shouldn’t be able to access pornography or self-harm forums, so platforms must stop them. But the means chosen—compulsory facial scans, credit-card checks, or government ID uploads—shift significant risks onto adults (and indeed older minors) who wish to access legal material without building a dossier of intimate preferences for governments, tech firms, or data brokers. The proportionality question looms: does preventing youthful exposure justify normalising intrusive identity checks for everyone?

The policy also invites a predictable cat‑and‑mouse game. If large numbers of adults immediately route around the system with VPNs, two uncomfortable options emerge: (a) accept that the law mostly inconveniences the rule‑abiding while the determined slip through, or (b) escalate—perhaps by blocking VPNs, criminalising circumvention, or compelling ISPs to perform deeper packet inspection. Each step ratchets up surveillance capacity that can later be redeployed for less defensible aims. That slippery-slope worry is not mere rhetoric; VPNs are precisely the tools citizens use to evade censorship in authoritarian states.

There is also an epistemic problem: “harm” is broad, and what counts as harmful content is partly a moral, not just psychological, judgement. The Act bundles pornography with material on eating disorders and self-harm, implying the same governance tool is apt for very different risk profiles. That conflation risks lazy policymaking: if age‑gating is your hammer, every distressing website becomes a nail. Ethically, we owe children more than blunt filters—we owe them evidence-based education, context, and supportive services.

Privacy advocates’ core objection is not a libertarian reflex against any child-protection measure; it is about data minimisation and purpose limitation. When a database exists that links real identities to sexual content consumption, the stakes are unusually high: think of outing, blackmail, or future regime change. The ethical test should be “least intrusive effective measure.” Could on‑device age estimation, anonymous token systems, or third‑party escrowed verification achieve the aim with less data risk? If yes, legislators choosing a heavier option bear the burden of argument.
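To make the data-minimisation point concrete, here is a minimal sketch, in Python, of one way an anonymous token scheme can decouple verification from identification, in the spirit of blind-signature designs such as Privacy Pass. Everything below is an illustrative assumption rather than a description of any deployed system, and the key sizes are toy ones: an issuer confirms a user's age once, signs a blinded token, and the sites that later accept that token never learn who presented it.

import hashlib
import secrets
from math import gcd

# Toy RSA key for a hypothetical age-check issuer. These primes are far too
# small for real use; a deployment would use keys of at least 2048 bits.
P, Q = 999983, 1000003
N = P * Q
E = 65537
D = pow(E, -1, (P - 1) * (Q - 1))  # issuer's private exponent (Python 3.8+)

def hash_to_int(msg: bytes) -> int:
    # Map a message into the RSA group Z_N.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

# 1. The user generates a random token and blinds its hash with factor r,
#    so the issuer cannot see (or later recognise) the token itself.
token = secrets.token_bytes(16)
r = secrets.randbelow(N - 2) + 2
while gcd(r, N) != 1:
    r = secrets.randbelow(N - 2) + 2
blinded = (hash_to_int(token) * pow(r, E, N)) % N

# 2. The issuer checks the user's age once, out of band (ID, credit card,
#    etc.), then signs only the blinded value.
blind_sig = pow(blinded, D, N)

# 3. The user removes the blinding factor, leaving a valid signature on the
#    token. The issuer has never seen this (token, signature) pair.
sig = (blind_sig * pow(r, -1, N)) % N

# 4. Any site can verify that the issuer vouched for this token's holder,
#    without learning who that holder is.
assert pow(sig, E, N) == hash_to_int(token)
print("age token accepted; no identity was disclosed to the site")

Real schemes must also handle token expiry and double-spending (typically via per-token uniqueness checks); the sketch shows only that age verification need not create a database linking identities to browsing.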

The petition surge hints at another ethical layer: democratic legitimacy. If hundreds of thousands object within days, was public consultation adequate? Rapid post‑implementation backlash may indicate that the law’s practical implications were poorly communicated—or that citizens accept child protection in principle but reject this specific implementation. In liberal democracies, durable regulation often depends on a social licence; ignoring widespread circumvention can corrode respect for law more generally.

What should an ethicist recommend? First, separate ends from means. Protecting children is a widely shared end. The question is which mix of education, parental tools, platform design nudges, and proportionate verification minimises rights‑invasion while actually working. Second, build reciprocity and accountability into any surveillance expansion: if citizens must identify themselves, regulators and ministers should publish transparent, auditable logs of how that data is accessed and by whom. (Radical transparency for the powerful can partially offset privacy losses for the rest of us.) Third, include sunset clauses and independent audits: if after, say, two years the policy demonstrably fails (because VPN circumvention remains trivial) or produces disproportionate harms (data breaches, chilling effects), it should lapse without heroic legislative effort.

Finally, this episode is a case study in a broader trend: governments reaching for infrastructural identity checks to solve disparate online problems. Today it is porn; tomorrow it may be political speech dressed up as “misinformation.” Ethically evaluating such moves requires asking not just “Will it block the thing we dislike?” but “What institutional capacities does it normalise, who controls them, and how easily can they be repurposed?” The VPN charts may be the public’s first, messy answer.


Human commentary

I won’t comment much on the content of this post, except to say that, so far as I can tell, it reports accurately on the topic and raises at least some of the key tensions between safety and privacy in this context.

Instead, I want to highlight that, if you look at the original output, you’ll notice the model engages in a kind of citation stuffing. It reuses the same two or three sources introduced in the first paragraph (from the FT and Ofcom) over and over, even in paragraphs whose content has no clear connection to them.

For instance, in the penultimate paragraph on what an ethicist would recommend, it runs through three distinct ideas and then cites those same initial news sources, rather than the philosophical or academic work where such proposals would have originated.

This points to a shift from the known problem of source hallucination to a different problem of citation misattribution. The source is real, but its connection to the claim is illusory, serving as a placeholder that gives a veneer of authority.

This is in part an artifact of the prompt used, which included the instruction, “Ensure you provide links to the main news story and any key sources used in the post.” Adding a clarification to “Ensure the sources used in a paragraph actually correspond to the content of that paragraph” seems to improve this issue, but I’m unsure how consistently it does so.

Regardless, it’s an important new dynamic to watch for: we’ve moved beyond only needing to check whether a source is fake; we must now also verify whether a cited source is hollow, a more subtle and difficult task for evaluators, especially for longer outputs from these models.
