
    UK Online Safety Act: Safety Gains or Dystopian Drift?

    Eighteen months on, the UK's law stands at a crossroads between protection and overreach. Is there a privacy-preserving path forward?

    Legislation · Published · 45 min read · By Legal Analysis Team


    The UK Online Safety Act 2023 (OSA) represents the most significant regulatory intervention in digital platform governance since the inception of the modern internet. Following a multi-year legislative process that began with the 2017 Internet Safety Strategy, the Act introduces comprehensive duties for user-to-user services, search engines, and pornography providers operating in the UK market. With children's duties live since 25 July 2025 and pornography providers already under 'highly effective' age-assurance obligations, millions of UK users now hit identity checks before they hit play. Support for protecting children is real. So is the public's recoil at being asked to prove identity to view lawful content. The policy choice is no longer whether to regulate, but whether the UK builds safety through privacy-preserving engineering or sleepwalks into a durable surveillance infrastructure.

    What the law now demands, and how we got here

    The OSA imposes systems-and-processes duties on user-to-user services and search, with the strongest protections for children. Pornography providers (Part 5) have been in scope since 17 January 2025; the broader children's codes for user-to-user and search services (Part 3) reached full force on 25 July 2025. Ofcom can fine up to £18m or 10% of qualifying worldwide revenue, whichever is greater; require a firm to appoint a 'skilled person', an independent assessor who audits systems and reports to Ofcom, usually at the firm's expense; and, at the limit, disrupt business and payment flows for non-compliant services. In specified circumstances there is also senior management liability for failure to comply with enforcement notices relating to certain child-safety duties. The regime is technology-neutral in principle and phased in via codes, guidance, and secondary legislation. [1-4]

    Additional scope and transition notes

    Part 3 applies where a service has "links with the UK" (e.g., targeting UK users; significant UK user base; or material risk of significant harm in the UK) and excludes Schedule-1 carve-outs (e.g., email, SMS/MMS, one-to-one live aural calls, certain limited-functionality modules, provider-published comment threads, and news-publisher content). The legacy Video-Sharing Platform regime (Communications Act 2003, Part 4B) was repealed on 25 July 2025; from that date, relevant VSPs are regulated solely under the OSA (a transitional period ran from Jan 2024). Category registers must be maintained by Ofcom once thresholds are in force, with the averaging method based on mean UK MAU over the preceding six months. [2-3, 6-7, 9, 11, 32]

    Illegal-content judgments (operational standard)

    When judging illegality (or fraudulent advertising), providers may act where they have reasonable grounds to infer that all elements of the offence-including mental elements-are present, and that there are no reasonable grounds to infer a defence may succeed; special rules address bot-generated content (mens rea attaches to the controller). Ofcom issued Illegal-Content Judgements Guidance in Dec 2024. [1, 7]
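    The operational standard compresses into a small piece of decision logic. A minimal sketch in Python (the structure and names here are ours, not Ofcom's):

```python
from dataclasses import dataclass

@dataclass
class ElementAssessment:
    """A moderator's view of one offence element (conduct, circumstance, or mental)."""
    name: str
    reasonable_grounds_to_infer: bool

def may_treat_as_illegal(elements: list[ElementAssessment],
                         grounds_defence_may_succeed: bool) -> bool:
    """Act only if there are reasonable grounds to infer EVERY element is
    present (for bot-generated content, mental elements are assessed against
    the bot's controller) AND no reasonable grounds to infer a defence may
    succeed."""
    return (all(e.reasonable_grounds_to_infer for e in elements)
            and not grounds_defence_may_succeed)

# Example: both elements of a fraud offence inferred, no plausible defence.
elements = [ElementAssessment("false representation made", True),
            ElementAssessment("intent to gain (mens rea)", True)]
print(may_treat_as_illegal(elements, grounds_defence_may_succeed=False))  # True
```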

    Free-expression safeguards (journalism/democratic content)

    Category 1 services must maintain layered protections for content of democratic importance and for news-publisher and journalistic content, including a temporary must-carry rule that requires notice and an appeal route for recognised news publishers before removal or account actions, subject to carve-outs where hosting would incur civil or criminal liability or involve a relevant offence. Terms of Service must explain how journalistic content is identified and how expedited complaints are handled. [1, 3, 7, 32]

    Fraudulent advertising duties

    Separate duties require Category 1 user-to-user and Category 2A search services to operate proportionate systems that prevent users from encountering fraudulent ads, minimise the time such ads remain live, and remove them swiftly once reported, with transparency about any proactive technologies used; Ofcom will publish a specialised code offering deemed-compliance steps. [1, 3, 7, 32]

    Funding and fees (QWR)

    Ofcom will part-fund the regime via industry fees. Providers above a Qualifying Worldwide Revenue (QWR) threshold must notify and pay an annual fee (with QWR aligned to service parts where regulated UGC, search content, or provider porn content may be encountered). Draft guidance sets out thresholds and notification mechanics; the Online Safety Act 2023 (Fees Notification) Regulations 2025 specify the notification content. [11, 32-33]

    Scope, duties & categorisation at a glance (Part 3, Part 5, thresholds)

    Part 3 - user-to-user & search (children's protections). Risk-assess illegal content and children's harms; implement proportionate safety measures via Ofcom Codes. For services likely to be accessed by children: complete children's risk assessments by 24 July 2025; apply Children's Codes from 25 July 2025. [2-3, 6-7]

    Part 5 - pornography providers (age assurance). Implement highly effective age assurance so pornographic content is not visible before or during checks; self-declaration is insufficient. In force since 17 January 2025. Part 5 covers "regulated provider pornographic content" (provider-published porn, not UGC), with text-only and certain text+emoji/GIF cases excluded. [2]

    Categorised services attract extra "big platform" duties:

    • Category 1 (very large U2U): >34m UK MAU + recommender, or >7m with recommender and forward/share.
    • Category 2A (search): >7m UK MAU (non-vertical).
    • Category 2B (U2U w/ DMs): >3m UK MAU and direct messaging.

    Threshold conditions were enacted by regulation in 2025; Ofcom is expected to publish a register once thresholds are in force. Averaging uses mean UK MAU across the prior six months. [3, 9]
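    The thresholds above reduce to a small amount of arithmetic. A toy sketch (the numbers come from the bullets above; the function and its inputs are our simplification, and actual designation is Ofcom's decision):

```python
from statistics import mean

def mean_uk_mau(monthly_uk_mau: list[int]) -> float:
    """Averaging method described above: mean UK MAU over the preceding six months."""
    return mean(monthly_uk_mau[-6:])

def categorise(uk_mau: float, is_search: bool = False, vertical_search: bool = False,
               has_recommender: bool = False, has_forward_share: bool = False,
               has_direct_messaging: bool = False) -> list[str]:
    """Toy classifier encoding the 2025 threshold numbers. A service can meet
    more than one threshold; this just restates the bullets as code."""
    cats = []
    if is_search:
        if uk_mau > 7_000_000 and not vertical_search:
            cats.append("Category 2A")
        return cats
    if (uk_mau > 34_000_000 and has_recommender) or \
       (uk_mau > 7_000_000 and has_recommender and has_forward_share):
        cats.append("Category 1")
    if uk_mau > 3_000_000 and has_direct_messaging:
        cats.append("Category 2B")
    return cats

mau = mean_uk_mau([38_000_000] * 6)
print(categorise(mau, has_recommender=True, has_forward_share=True,
                 has_direct_messaging=True))  # ['Category 1', 'Category 2B']
```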

    Figure: OSA roadmap (2017-2026), linear timeline

    Ofcom's implementation: a case study in mission and mis-execution

    Ofcom's January 2025 age-assurance guidance defines 'highly effective' as accurate, robust, reliable and fair, and adds a hard rule: pornography must not be visible before or during checks. It lists methods capable of being highly effective (photo-ID matching, facial age estimation, open banking, MNO checks, credit-card checks, digital identity services) and rules out self-declaration. In theory, it is method-agnostic. In practice, early industry deployments have skewed to document uploads and server-side biometrics: precisely the approaches that create identity databases and cross-service linkability. [1-2]

    Powers at a glance

    Information notices and audits; 'skilled person' appointments for deep technical review; fines up to £18m or 10% of qualifying worldwide revenue, whichever is greater; business-disruption orders (e.g., payment/advertising); access-restriction orders (e.g., app stores, ISPs); and, where statutory tests are met, criminal exposure for senior managers who fail to comply with Ofcom enforcement on particular child-safety duties. Individuals can be liable where offences occur with their consent, connivance or neglect. Ofcom's Enforcement and Information-Powers guidance also sets out investigation and settlement approaches. [2, 11, 34]

    Stakeholders who should be natural allies have sounded the alarm. The Internet Watch Foundation-no stranger to the harms the Act targets-warned that Ofcom's codes risk a 'technically feasible' escape hatch that invites platforms to do less to detect child sexual abuse material. Families bereaved by self-harm have accused the regulator of moving too slowly and too narrowly. Even the Government's own debate materials and Hansard exchanges show concern that crucial risks (e.g., livestreaming) were kicked to later phases, pushing enforceability into 2026. These are not fringe critics; they are canaries. [12]

    Meanwhile, Ofcom's expansion has been brisk. Headcount and programme spend have grown since 2024 as it stood up online-safety directorates and enforcement capacity. That footprint is inevitable for any scheme of this breadth-but regulatory capacity will not fix weak design choices. If the codes reward identity walls, the machine will scale identity walls. [11]

    Implementation snapshots: early cases and responses

    • Part 5 enforcement ramp-up: Ofcom opened enforcement programmes against dozens of pornographic sites for failing to implement 'highly effective' age assurance from July 2025. Public statements and "important dates" guidance flagged the March-July 2025 transition to live enforcement.
    • 4chan / 8kun / Gab: Ofcom issued information and compliance notices to fringe platforms accessible in the UK. In August 2025, 4chan and Kiwi Farms filed suit in U.S. federal court challenging Ofcom's jurisdiction and characterising the OSA as extra-territorial censorship; by mid-October, Ofcom warned 4chan it faced a £20k penalty plus £100-per-day fines for refusing to submit an illegal-harms risk assessment. The U.S. tech press and civil-liberties groups framed this as a sovereignty and free-speech clash. [26-27, 47]
    • File-sharing clean-up: A parallel Ofcom probe into file-sharing hosts led 1Fichier and Gofile to deploy perceptual hash-matching for CSAM, while services such as Krakenfiles and Nippydrive geoblocked UK users; new investigations continue against image hosts and age-verification providers that ignore data requests. [47]
    • Wikipedia: The Wikimedia Foundation sought judicial review of the Categorisation Regulations (risk of Category 1 designation). On 11 August 2025, the High Court dismissed the challenge but stressed Ofcom must shield Wikipedia from disproportionate duties; on 12 September the Foundation confirmed it would not appeal and would monitor how Ofcom applies the ruling. [28]
    • Steam adult-content gating: Valve now requires UK Steam accounts to register a credit card before viewing "mature" games or community hubs, pitching it as a privacy-preserving way to meet OSA duties—normalising hard proof-of-age inside a mainstream platform. [48]
    • Over-blocking of innocuous content: Risk-averse platforms went far beyond pornography. Reddit and Discord age-gated subforums about quitting smoking, menstruation and craft beer; Spotify required ID to watch certain music videos; GIFs of SpongeBob, Wikipedia articles and game mods were temporarily inaccessible; and the Vagina Museum reports that social-media filters misclassify educational posts on anatomy—especially when they depict women of colour or pubic hair. [37–40]

    Safety gains, and where the harms moved

    Two sets of facts can be true: (1) exposure of UK children to some categories of pornographic material has fallen; and (2) the net privacy cost for adults has risen sharply. Ofcom's approach has catalysed investment in moderation processes and transparency at scale; the Act's new offences have delivered early prosecutions. Yet the rollout has also normalised biometric prompts for lawful content and pushed marginal and community platforms to geo-block or exit the UK. Concentration increases, optionality shrinks, and 'compliance theatre' grows. [2, 6-7, 12]

    What changed in 2025

    Implementation patterns diverged: some platforms adopted privacy-first approaches (unlinkable credentials; on-device checks), others defaulted to ID uploads, and a tranche of smaller or volunteer-run services limited UK access. UX friction rose for lawful adult users, even as some child-safety outcomes improved. [2-3]

    The economics are crude: compliance is a fixed cost. Giants amortise; small sites suffocate. That is not inevitable (shared, privacy-preserving infrastructure can invert the cost curve), but it is the trajectory if ID upload remains the informal gold standard.

    Collateral damage and lost voices

    The Hamster Forum and other hobby sites have shut down because they cannot afford commercial age-verification services. Independent game creators on Itch.io saw entire author pages blocked when a single upload triggered an adult flag, severing access to their audiences. These outcomes illustrate how a one-size-fits-all compliance pattern squeezes small publishers and consolidates attention on the largest platforms. [43, 45]

    Systemic risk: identity-first by default

    The OSA's danger is not only what it bans; it is what it builds. Orwell warned of surveillance enforced by fear; Huxley, of control through comfort. Age-assurance systems that log lawful views and tie them to identity create infrastructure—data flows, legal levers, commercial incentives—ripe for function creep. Today: over-18 checks for porn. Tomorrow: 'prove your age to read' for knife-crime coverage; 'prove your location' for political content tagged as 'foreign'. None of this requires conspiracy. It requires only inertia, budgets, and dashboards.

    • Function creep: tools built for one harm expand to others.
    • Data concentration: ID vaults and biometric stores become single points of failure.
    • Technical lock-in: proprietary SDKs and compliance APIs entrench vendors and design choices.

    The remedy is not to abandon safety, but to bind it to privacy by design—hard-coded limits on identifiability, linkability, and observability—as UK GDPR and the Age-Appropriate Design Code already require. The technology exists. The question is whether the incentives do. [2]

    Algorithmic bias and exclusion

    Facial-age-estimation systems misidentify people of colour and disabled users, and millions of adults lack government ID or personal devices—locking them out when ID-first is the default. The Vagina Museum’s experience also shows how filters misclassify educational anatomy content. Without robust bias audits and diverse training data, the result is disproportionate harm to already-marginalised users. [37, 40]

    Privacy and security incidents

    Identity-first deployments amplify breach impact. A widely installed Chrome VPN extension was caught secretly capturing full-page screenshots and exfiltrating data—an example of how “workarounds” can also introduce surveillanceware. In October 2025 Discord confirmed a contractor compromise that exposed about 70,000 users’ government ID selfies and customer-support messages, despite the platform itself not being breached. Tom’s Guide further notes that stolen ID photos can be reused to open financial accounts, and some vendors may face cross-border data demands. Without strict data minimisation and on-device processing, age-assurance systems become attractive targets for hackers and state surveillance. [41, 42, 49]

    UK traditions of freedom and philosophical context

    The UK’s liberal tradition protects anonymous reading and robust debate. Mandatory ID checks, broad data retention and potential client-side scanning sit uneasily with Article 8 ECHR and common-law norms. The risk is cultural as well as technical: normalising identity gates for lawful content reshapes how Britons access information, chilling speech that requires pseudonymity—journalism, health, sexuality, politics. Campaigns by groups like Article 19 and Big Brother Watch frame this as a constitutional choice, not a mere engineering preference. [20–22]

    Privacy as a fundamental right—and the technical levers

    Under Article 8 ECHR, given effect domestically by the Human Rights Act 1998, public authorities must act compatibly with the right to respect for private life. Any interference must be lawful, pursue a legitimate aim, and be necessary in a democratic society—proportionality asks: is the measure suitable; are less intrusive alternatives available; and does it strike a fair balance? Regulators operating statutory schemes sit inside that constraint.

    UK GDPR and the Data Protection Act 2018 convert those values into engineering: data minimisation and purpose limitation (Art. 5); a lawful basis (Art. 6); and data protection by design and by default (Art. 25). For services 'likely to be accessed by children', the ICO's Age-Appropriate Design Code mandates privacy-protective defaults. In practice, privacy is preserved when systems minimise three properties: identifiability (can this action be tied to me?), linkability (can my actions across sites be joined up?), and observability (how much data must be revealed to perform the check?). PET-first age assurance reduces all three. [14–15]

    VPNs and the new detection playbook

    The Act does not ban VPNs. It does, however, create powerful incentives to detect and deter them. The Age Verification Providers Association (AVPA) has lobbied for a model that treats VPN use as a risk signal and urges platforms to profile behaviour—UK-daytime activity, UK-English locale, following mostly UK accounts, 'youthful' engagement patterns—and to escalate to mandatory ID or one-time GPS checks when confidence drops. In short: if an IP mask hides you, your behaviour should unmask you. That is a civil-liberties hazard masquerading as due diligence. [10]

    The logic also misreads threat models. VPN 'detection' disproportionately flags privacy-protective adults, journalists, and abuse survivors while remaining porous to determined evaders. It risks enshrining mass behavioural surveillance across the stack—from adtech telemetry to device-level location taps—without clear gains in child protection.

    Blending VPN heuristics with behavioural profiling remains easy to evade for motivated teens yet intrusive for everyone else—precisely the wrong privacy/safety trade. [10, 37]
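    To see why this playbook worries civil-liberties groups, it helps to write it down. A deliberately crude sketch of the escalation logic that AVPA-style proposals imply (every signal, weight and threshold here is invented to show the shape of the logic, not any vendor's actual system):

```python
# Hypothetical reconstruction of a "VPN as risk signal" escalation flow,
# as described above. All numbers are invented for illustration.
SIGNAL_WEIGHTS = {
    "vpn_or_proxy_ip": -0.4,            # an IP mask is treated as suspicion
    "uk_daytime_activity": 0.2,
    "uk_english_locale": 0.1,
    "follows_mostly_uk_accounts": 0.2,
    "youthful_engagement_pattern": -0.3,
}

def adult_confidence(signals: dict[str, bool]) -> float:
    """Behavioural 'confidence this is a verified UK adult' score."""
    return 0.5 + sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def escalation(signals: dict[str, bool]) -> str:
    score = adult_confidence(signals)
    if score >= 0.6:
        return "allow"
    if score >= 0.4:
        return "escalate: mandatory ID upload"        # the privacy cost lands here
    return "escalate: one-time GPS location check"    # ...and here

# A privacy-conscious adult on a VPN scores 0.3 and is flagged for a GPS check,
# while a teen who presents all the UK signals without a VPN scores 0.7 and
# sails through: intrusive for the innocent, porous to the motivated.
print(escalation({"vpn_or_proxy_ip": True, "uk_daytime_activity": True}))
```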

    Encryption: the loaded gun on the table (and the EU's "chat control" push)

    The OSA's 'proactive technology' provisions, and Ofcom's ongoing 'additional safety measures' consultation, keep client-side scanning on the agenda—paused, but pointed. Technical consensus holds that scanning before encryption or inserting privileged scanning hooks is incompatible with end-to-end encryption (E2EE); sophisticated offenders route around it while the public inherits a universal interception surface. European human-rights jurisprudence is moving the same way: blanket decryption or its functional equivalent collides with Article 8. [4, 13]

    EU "chat control" (CSA detection orders)

    Parallel EU proposals (widely dubbed "chat control") would enable detection orders that, in practice, require scanning private messages, including in E2EE contexts. Under Denmark's Council presidency (since 1 July 2025), compromise texts revived detection orders, with a tentative vote target of mid-October 2025 (14 October). Civil-society groups and data-protection bodies argue such orders are incompatible with E2EE and fundamental rights. Political negotiations remain live into 2026. [24, 32–33]

    Reporting and technology notices (live levers)

    The Commencement No. 5 Regulations brought in CSEA reporting to the NCA from November 2025 for in-scope U2U services (now in effect). Separately, Ofcom may issue "use of technology" notices for terrorism/CSEA requiring accredited technology (with human moderation) or "best endeavours" to develop qualifying tech. These are the practical backdrops to any client-side-scanning debate. [1, 4, 32, 35]

    What to watch next (6–18 months) — with metrics to track

    • 20 Oct 2025: Additional Safety Measures consultation closes; interpretation of 'proactive technology' and encryption will crystallise. Track: response count; final guidance stance on client-side scanning (explicitly disallowed vs 'when feasible'). [4]
    • Post-launch telemetry: AVPA members logged about 5.7 million age checks during the first OSA enforcement weekend; watch whether Ofcom’s next codes favour reusable credentials and livestream safeguards to avoid a “new cookie popup” backlash. [47]
    • Late 2025: First major enforcement actions likely—precedents on privacy-preserving vs identity-based compliance. Track: number/size of penalties; proportion citing biometric/image-retention failures; appeals outcomes. [3]
    • 2025–2026: Ofcom to publish the register of categorised services; further duties for Category 1/2A/2B. Track: how many services in each category; presence of non-profits (e.g., Wikipedia) on Category 1 list. [2–3]
    • Skilled-person reports: Deep audits will reveal the regulator's true priorities. Track: frequency of PET-first recommendations vs ID-first; time-to-remediation metrics. [1]
    • Potential litigation: Proportionality and encryption challenges (including Wikipedia follow-ups) could test Article 8 compatibility. Track: case filings, interim relief, statements on E2EE. [13]

    Transparency & fees metrics

    • Transparency cycle: number of transparency notices issued; split between core vs thematic metrics; completeness/quality of published reports. [32–34]
    • Fees/QWR regime: how many providers cross QWR threshold; timeliness/accuracy of notifications; Ofcom's final threshold decision and fee receipts. [11, 33]
    • Use-of-technology/CSEA: number of technology notices; CSEA reporting volumes post-go-live; average investigation duration and settlements. [1, 32, 35]

    The steelman for the other side

    Safety groups argue, with passion and data, that platforms 'catch less of the bad stuff' than they could; that livestreaming, group DMs and ephemeral formats amplify grooming risk; that the Act's outcomes should trump implementation niceties if that is what it takes to pull abuse content off the network. They are right to demand urgency and accountability. They are wrong to assume that identity infrastructure and client-side scanning will work in the hands of fallible governments and profit-maximising vendors. The trade is not 'privacy versus children'; it is 'security for everyone versus security theatre for no-one'. [12]

    Civil society and expert critiques – and industry's reply

    Civil-liberties groups have supported strong child-safety outcomes but warn that identity-first defaults entrench surveillance infrastructure and chill lawful speech; industry urges method-agnosticism but often deploys the highest-friction (and most invasive) options first to minimise compliance risk. The right test is equivalence: PET-first routes that meet Ofcom's "highly effective" standard should be treated as first-class compliance, not exotic exceptions. [20–22]

    Resource allocation critique. Several child-protection advocates argue the OSA diverts scarce resources from investigating actual abuse. If only a small share of referrals lead to arrests or safeguarding action, doubling down on ID checkpoints risks performance theatre over impact. A rights-preserving approach would prioritise funding for investigations and victim support while adopting PET-first access controls—targeting offenders, not everyone. [42]

    A working alternative: PET-first age assurance

    A privacy-preserving stack can meet Ofcom's 'highly effective' bar without ID vaults or cross-service fingerprints:

    • Selective-disclosure credentials (W3C VCs, BBS+): banks/MNOs issue an 'over-18' attest during familiar KYC flows; users hold it locally in a wallet; platforms verify a zero-knowledge proof that never reveals identity or exact age. Each presentation is unlinkable across sites.
    • On-device age estimation: a local model performs a one-off check; only a binary flag leaves the device; no face images or templates transit to servers.
    • Short-lived tokens + revocation accumulators: break correlation over time while still allowing fraud response.
    • Governance: DPIAs; bias audits; no raw biometrics stored; independent security reviews.

    This is not speculative: standards and open implementations already exist (W3C Verifiable Credentials 2.0; IETF Privacy Pass), and early deployments show that the user journey can be as quick as a document upload. What is missing is regulatory preference signalling—codes and enforcement that treat PET-first as the default, not an exotic exception—something that Ofcom's technology-neutral guidance and UK GDPR's data-minimisation duties already permit in principle. [1–4, 18–19]
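    As a concrete shape for the credential flow, here is a deliberately simplified single-use-token sketch in Python (using the pyca/cryptography package). It shows issuance and presentation only; real deployments would add blind issuance (IETF Privacy Pass) or BBS+ proofs so that even the issuer cannot link tokens, which this toy version does not attempt:

```python
# Toy "over-18 token" flow with single-use signed tokens (pyca/cryptography).
# Real deployments use blind issuance or BBS+ proofs so the ISSUER also cannot
# link issuance to presentation, plus privacy-preserving accumulators for
# revocation; this sketch shows only the issue/present shape.
import os, json
from cryptography.hazmat.primitives.asymmetric import ed25519

issuer_key = ed25519.Ed25519PrivateKey.generate()   # e.g. a bank/MNO after KYC
issuer_pub = issuer_key.public_key()

def issue_over18_tokens(n: int) -> list[dict]:
    """Issuer mints n single-use tokens asserting only 'over 18'."""
    tokens = []
    for _ in range(n):
        body = json.dumps({"claim": "over18", "id": os.urandom(16).hex()}).encode()
        tokens.append({"body": body, "sig": issuer_key.sign(body)})
    return tokens

def verify_presentation(token: dict, spent_ids: set) -> bool:
    """Relying party learns one bit (over 18) plus a random single-use ID."""
    issuer_pub.verify(token["sig"], token["body"])   # raises if forged
    claim = json.loads(token["body"])
    if claim["id"] in spent_ids:                     # double-spend check
        return False
    spent_ids.add(claim["id"])
    return claim["claim"] == "over18"

wallet = issue_over18_tokens(5)     # held locally in the user's wallet
print(verify_presentation(wallet.pop(), spent_ids=set()))  # True; no identity revealed
```

    Because each token carries only a random ID and the over-18 bit, two presentations at different sites cannot be joined at the verifier side, which is the "Very Low" linkability row in the comparison table below.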

    How the main methods work (granular)

    1. ID upload (centralised). A user uploads an ID; OCR + selfie match; third-party returns an assertion. Risks: vault breach; template reuse; demographic bias; vendor lock-in.
    2. Facial age estimation (server). Image or embedding sent to verifier; returns ≥18 decision. Risks: PAD evasion; demographic differentials; server-side biometric surface.
    3. Facial age estimation (on-device). Local model; only a signed boolean leaves device; no central biometric store. Challenges: device diversity; certification. (A sketch follows this list.)
    4. Bank/MNO "over-18" attest. Credential stating only over-18; issuer doesn't see where presented. Risks: issuer concentration; reuse linkability if poorly designed.
    5. Unlinkable age tokens (VC/SD-JWT + BBS+ / zk proofs). Zero-knowledge range proofs; revocation via privacy-preserving accumulators. Risks: wallet UX, verifier library quality.
    6. Short-lived tokens with revocation. Time-boxed tokens bound to device/session; blocklists via accumulators.
    7. Governance patterns. DPIAs; bias testing; no raw biometrics; independent reviews; non-identifying appeals.
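    Method 3's contract can be pinned down precisely: nothing crosses the network except a signed boolean bound to the verifier's fresh challenge. A sketch under the assumption that a certified device key already exists (the local model is stubbed):

```python
# Sketch of method 3: on-device age estimation where only a signed boolean
# leaves the device. The freshly generated key stands in for a certified
# hardware-backed device key; the model itself is a stub.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519

device_key = ed25519.Ed25519PrivateKey.generate()

def estimate_age_locally(camera_frame: bytes) -> float:
    """Stub for the on-device model; the frame never leaves this function."""
    return 27.4   # hypothetical estimate

def attest_over18(camera_frame: bytes, verifier_nonce: bytes) -> bytes:
    """Runs on-device. Output contains no image, template, or exact age."""
    over18 = estimate_age_locally(camera_frame) >= 18.0
    payload = json.dumps({"over18": over18, "nonce": verifier_nonce.hex()}).encode()
    return payload + device_key.sign(payload)     # Ed25519 sig = final 64 bytes

def verifier_accepts(message: bytes, verifier_nonce: bytes) -> bool:
    payload, sig = message[:-64], message[-64:]
    device_key.public_key().verify(sig, payload)  # plus cert-chain check in reality
    claim = json.loads(payload)
    return claim["over18"] and claim["nonce"] == verifier_nonce.hex()

nonce = b"\x01" * 16   # fresh per-session challenge (replay protection)
print(verifier_accepts(attest_over18(b"<frame bytes>", nonce), nonce))  # True
```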

    Comparison of age-assurance methods and privacy properties

    | Method | Identifiability | Linkability | Observability | DPIA burden | Typical failure modes |
    | --- | --- | --- | --- | --- | --- |
    | ID upload (centralised) | High | High across services | High | High | Vault breach; reuse; OCR/face-match bias |
    | Facial age estimation (server) | Medium | Medium if token re-use | Medium–High | Medium–High | PAD spoofing; demographic error rates |
    | Facial age estimation (on-device) | Low | Low (per session) | Low | Medium | PAD tuning; model drift |
    | Bank/MNO 'over-18' attest | Medium (issuer knows identity) | Low at verifier | Low–Medium | Medium | Issuer concentration; token reuse |
    | Unlinkable age token (VC/SD-JWT) | Very Low | Very Low (per relying party) | Very Low | Low–Medium | Revocation design; RP UX; token theft window |

    Implementation notes for meeting Ofcom's 'highly effective' bar

    • Accuracy & robustness: ROC/AUC; FAR/FRR; PAD benchmarks.
    • Reliability & fairness: pinned models; disparate-impact tests.
    • Privacy by design: unlinkable proofs; no raw biometrics; key rotation; short token lifetimes.
    • User experience: one-tap wallet proofs; no coercive ID fallback. [16–19]
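    These fairness tests are cheap to operationalise. A minimal sketch of the FAR/FRR arithmetic and a per-group disparity check (all counts hypothetical):

```python
# Minimal FAR/FRR arithmetic for an age gate, plus a per-group disparity check.
# "Accept" = judged over 18. FAR counts under-18s wrongly let through (the
# safety failure); FRR counts adults wrongly rejected (friction/exclusion).
def far(minors_accepted: int, minors_total: int) -> float:
    return minors_accepted / minors_total

def frr(adults_rejected: int, adults_total: int) -> float:
    return adults_rejected / adults_total

print(f"FAR = {far(12, 1000):.1%}")          # 1.2% of minors slip through

# Per-group FRR exposes the demographic differentials the text describes.
rejections = {"group_a": (30, 1000), "group_b": (90, 1000)}  # (rejected, total)
frrs = {g: frr(*counts) for g, counts in rejections.items()}
ratio = max(frrs.values()) / min(frrs.values())
print(frrs, f"disparity = {ratio:.1f}x")     # a 3.0x gap should fail an audit
```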

    What the ICO expects—and coordination with Ofcom

    The ICO will judge any age-assurance scheme against UK GDPR: data minimisation, purpose limitation, a clear lawful basis, and data protection by design and by default. For services likely to be accessed by children, the Age-Appropriate Design Code requires privacy-by-default settings and limits on profiling. That aligns with Ofcom's outcome-based 'highly effective' standard. [14–15]

    Compliance in practice (micro-matrix)

    • Effectiveness (Ofcom): high confidence under-18s cannot normally access. ICO lens: necessity/proportionality. PET pattern: zk proofs + published accuracy.
    • Minimisation (Ofcom/ICO): only data needed to establish age. PET pattern: selective disclosure; on-device; no raw biometrics retained.
    • Transparency & redress (Ofcom): effective complaints/appeals. ICO lens: rights to information/rectification. PET pattern: non-identifying ticket IDs; auditable without user IDs.
    • Security (both): appropriate technical/organisational measures. PET pattern: key rotation; privacy-preserving revocation; PAD benchmarks.

    Standards: BSI PAS 1296; ISO/IEC 30107-3; W3C VCs; IETF Privacy Pass. [16–19]
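    One listed pattern, short-lived tokens, fits in a few lines. A sketch of a time-boxed, device-bound age token with daily key rotation (lifetimes and key handling are illustrative; the Privacy Pass blinding and accumulator-based revocation mentioned above are omitted for brevity):

```python
# Short-lived age-token sketch: time-boxed, HMAC-signed, key rotated daily.
# Illustrative only; assumes issuer and verifier are the same service.
import hmac, hashlib, json, time, os

KEY_EPOCH = 86_400                            # rotate signing key daily
keys = {int(time.time()) // KEY_EPOCH: os.urandom(32)}

def current_key():
    epoch = int(time.time()) // KEY_EPOCH
    keys.setdefault(epoch, os.urandom(32))    # lazy daily rotation
    return epoch, keys[epoch]

def mint(device_binding: str, ttl: int = 600) -> str:
    epoch, key = current_key()
    body = json.dumps({"over18": True, "dev": device_binding,
                       "exp": int(time.time()) + ttl, "epoch": epoch})
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return body + "|" + tag

def check(token: str, device_binding: str) -> bool:
    body, tag = token.rsplit("|", 1)
    claim = json.loads(body)
    key = keys.get(claim["epoch"])
    if key is None or not hmac.compare_digest(
            tag, hmac.new(key, body.encode(), hashlib.sha256).hexdigest()):
        return False
    return claim["exp"] > time.time() and claim["dev"] == device_binding

t = mint("device-123")
print(check(t, "device-123"), check(t, "device-999"))   # True False
```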

    International context: OSA vs peers (uniform comparison)

    | Jurisdiction | Who's covered (scope) | What platforms must do | Enforcement status & dates | Encryption stance / risks | Notable controversies |
    | --- | --- | --- | --- | --- | --- |
    | UK (OSA) | Porn providers (Part 5); U2U/search (Part 3); very large services categorised | Part 5: highly effective age assurance; Part 3: children's risk assessments & mitigations per Codes | Part 5 live 17 Jan 2025; Part 3 children's protections 25 Jul 2025 | E2EE tension via "proactive tech" consultation; CSS risk debated | Enforcement on porn sites; notices to fringe platforms; Wikipedia JR dismissed |
    | EU (DSA + CSA proposal) | DSA: VLOPs/VLOSE + all intermediaries systemically; CSA: all comms hosts/providers | DSA: risk assessments, audits, transparency; no blanket AV. CSA (proposed): detection orders may imply client-side scanning | DSA in force; CSA still negotiated (2025) | CSA could undermine E2EE; regulators & civil society warning | "Chat control" backlash; Council/Parliament divided |
    | USA (state laws) | Porn sites (varies by state); some social-media youth access laws | AV often via ID/third-party; fines/safe-harbours differ by state | SCOTUS upheld TX HB 1181 (27 Jun 2025); patchwork expansion | No federal CSS mandate; adult-privacy burden from ID checks | Platform geoblocking; ongoing litigation |
    | Australia (SMMA + eSafety) | Social media accounts for under-16s; broader online-safety powers | "Reasonable steps" to prevent under-16 accounts; guidance on age-assurance pathways | Enforceable 10 Dec 2025; guidance in development | No E2EE ban; pathway choices (on-device vs ID) under scrutiny | Implementation details and proportionality tests in focus |

    DSA/DMA note: EU DSA is systemic and risk-based (audits, ad transparency, recommender controls) and bans targeted ads to minors, but does not mandate age gates; the UK OSA uniquely couples age-gating with Ofcom Codes. The EU DMA is competition-focused (gatekeepers), relevant only insofar as interoperability/choice can reduce lock-in for verification stacks. [24, 36]

    Economic impact and market dynamics

    Compliance costs behave like fixed costs: large platforms amortise them; small services and community projects struggle. That creates pressure to geo-block the UK or to adopt the cheapest-to-integrate (often most invasive) verification vendor. A standardised, privacy-preserving stack—unlinkable credentials, on-device checks, shared open tooling—would reduce per-service cost, improve security, and avoid lock-in. Until codes and procurement preference signal that direction, the market will skew toward identity uploads and proprietary SDKs.

    Quantifying the cost. Typical quotes for small forums run into the low thousands annually (e.g., ~£2,400/year), and popular vendors’ entry plans start around US $250/month on annual contracts. For volunteer-run communities, that pricing is effectively a shutdown tax—hence the wave of UK geo-blocks in mid-2025. [43, 46]
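    The arithmetic behind the "shutdown tax" is stark. Using the ~£2,400/year figure above (user counts are hypothetical stand-ins):

```python
# Fixed-cost asymmetry, using the ~£2,400/year small-forum quote cited above.
ANNUAL_COST_GBP = 2_400
for name, users in [("hobby forum", 1_500),
                    ("mid-size site", 250_000),
                    ("major platform", 20_000_000)]:
    print(f"{name}: £{ANNUAL_COST_GBP / users:.4f} per user per year")
# £1.60 vs £0.0096 vs £0.0001: the same bill is a rounding error for the
# giant and a shutdown tax for the volunteer community.
```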

    Traffic shifts and enforcement. Ofcom’s monitoring shows overall UK visits to pornography sites falling by roughly a third post-go-live, while Pornhub owner Aylo reports a 77% domestic drop and claims users are migrating to non-compliant rivals; Ofcom counters that the regime ends an “age-blind internet” and is escalating against holdouts. [47, 50]

    Incentives, not edicts: follow the money

    Age-verification vendors profit from mandates and recurrence; more checks mean more contracts. Platforms seek the 'safest' harbour in Ofcom's codes with the least integration risk—often document upload. Regulators accrue staff, remit and dashboards. None of these incentives align with data-minimising design unless the codes do. Ofcom's own expansion mirrors that misalignment: not malice—just a machine optimised for throughput over calibration. A competent regulator would prefer unlinkable proofs, penalise surveillance shortcuts, and publish rigorous equivalence tests so small services can comply without buying an ID vault. The fees/QWR regime underscores these incentives: who pays, and how much, now depends on how Ofcom counts the revenue tied to regulated surfaces. [11, 32–33]

    Civic action: bending the arc

    Three pressure points matter over the next 6–18 months:

    1. Consultation responses (due 20 October 2025): Submit evidence that PET-first approaches meet the 'highly effective' bar; document harms of identity defaults; quantify cost savings from shared PET infrastructure. [4]
    2. Parliamentary oversight: Use constituency channels to demand proportionality tests, cost transparency (including Ofcom headcount and budgets), and explicit PET preference in codes. [3]
    3. Support the watchdogs: Civil-society groups and technical organisations need resources to litigate disproportionate measures and build open PETs—e.g., Open Rights Group, Big Brother Watch, Article 19. [20–22]

    The Open Rights Group’s campaign—“Tell your MP: The Online Safety Act isn’t working”—highlights wrongful censorship, restrictions on teens’ expression, hand-off of data to unregulated vendors and the ease with which young people bypass the law using VPNs; it provides a one-click route to contact MPs. [44]

    Action toolkit

    Template — Ofcom consultation submission

    Title: PET-first age assurance meets 'highly effective' while minimising surveillance risk

    Summary: We support the OSA's objectives and propose that Ofcom explicitly prefer privacy-preserving methods that meet the highly effective bar without identity sprawl: selective-disclosure credentials with unlinkable proofs and on-device age estimation. These approaches minimise identifiability, linkability, and observability, aligning with UK GDPR and the ICO's Age-Appropriate Design Code.

    Bottom line

    The evidence of over-blocking, demographic bias, high small-site costs and easy circumvention suggests the current path erodes free expression, anonymity and privacy without reliably stopping determined offenders.

    Eighteen months in, the OSA has achieved real safety gains and created real surveillance risk. Ofcom's choices—what it prefers, what it tolerates, where it enforces—will decide which weighs more. A rights-preserving path exists: PET-first age assurance; encryption kept whole; design duties over device scanners; transparency and accountability over identity walls. The country that gave the world modern privacy law does not need to choose between children and civil liberties. It needs to choose competent regulation.

    Glossary of key terms

    • Age assurance (AA): Techniques to estimate or verify a user's age (not necessarily identity).
    • Client-side scanning: Scanning user content on their device before/after encryption; widely seen as incompatible with E2EE.
    • E2EE (end-to-end encryption): Only sender and receiver can read messages; services cannot.
    • Identifiability / Linkability / Observability: Can the action be tied to me? Can actions across sites be linked? How much data must I reveal?
    • Ofcom: UK regulator implementing and enforcing the OSA.
    • PETs (privacy-enhancing technologies): Methods like zero-knowledge proofs that minimise personal-data exposure.
    • Selective-disclosure credentials: Cryptographic credentials proving an attribute (e.g., "over 18") without revealing identity.
    • VLOP/VLOSE (EU DSA): Very large platforms/search engines with extra systemic-risk duties.
    • Qualifying Worldwide Revenue (QWR): Revenue referable to parts of a service where regulated UGC, search content, or provider porn content may be encountered; used for fee liability. [11, 33]

    References

    [1] 4chan et al. (2025) '4chan & Kiwi Farms v. Ofcom (U.S. filing)', Court Filing. Available at: https://www.courtlistener.com/ (Accessed: 21 January 2026).
    [2] AP News (2025) 'Supreme Court upholds Texas age-verification law', Associated Press. Available at: https://apnews.com/article/supreme-court-texas-age-verification-porn (Accessed: 21 January 2026).
    [3] Article 19 (2025) 'OSA - Freedom of expression analysis', Article 19. Available at: https://www.article19.org/resources/uk-online-safety-act/ (Accessed: 21 January 2026).
    [4] ArtsProfessional (2025) 'Online Safety Act could lead to censorship of educational posts', ArtsProfessional. Available at: https://www.artsprofessional.co.uk/ (Accessed: 21 January 2026).
    [5] Australian Government (2025) 'Social media minimum age - in force 10 Dec 2025', eSafety Commissioner. Available at: https://www.esafety.gov.au/ (Accessed: 21 January 2026).
    [6] AVN (2025) 'UPDATED: Aylo Accuses Ofcom of Ineffective AV Enforcement', AVN. Available at: https://avn.com/ (Accessed: 21 January 2026).
    [7] AVPA (2025) 'Position papers on age assurance, VPN risk, circumvention', Age Verification Providers Association. Available at: https://avpassociation.com/ (Accessed: 21 January 2026).
    [8] BBC News (2025) 'ID photos of 70,000 users may have been leaked, Discord says', BBC News. Available at: https://www.bbc.co.uk/news/ (Accessed: 21 January 2026).
    [9] Big Brother Watch (2025) 'Stop the Scan / OSA resources', Big Brother Watch. Available at: https://bigbrotherwatch.org.uk/campaigns/stop-the-scan/ (Accessed: 21 January 2026).
    [10] Biometric Update (2025) 'Is age verification killing porn site traffic? Aylo says yes, AVPA says no', Biometric Update. Available at: https://www.biometricupdate.com/ (Accessed: 21 January 2026).
    [11] BSI (2018) 'PAS 1296:2018 - Online age checking: Code of practice', British Standards Institution. Available at: https://www.bsigroup.com/en-GB/standards/pas-1296/ (Accessed: 21 January 2026).
    [12] CyberInsider (2025) 'Chrome VPN extension with 100k installs screenshots all sites users visit', CyberInsider. Available at: https://cyberinsider.com/ (Accessed: 21 January 2026).
    [13] DSIT (2025) 'Online Safety Act: explainer', Department for Science, Innovation and Technology. Available at: https://www.gov.uk/government/publications/online-safety-act-explainer (Accessed: 21 January 2026).
    [14] EFF (2025) 'Americans, Be Warned: Lessons From Reddit's Chaotic UK Age Verification Rollout', Electronic Frontier Foundation. Available at: https://www.eff.org/deeplinks/2025/08/americans-be-warned-lessons-reddits-chaotic-uk-age-verification-rollout (Accessed: 21 January 2026).
    [15] EFF (2025) 'Blocking access to 'harmful' content will not protect children online', Electronic Frontier Foundation. Available at: https://www.eff.org/deeplinks/ (Accessed: 21 January 2026).
    [16] European Commission (2024) 'EU DMA/DSA official materials', European Commission. Available at: https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package (Accessed: 21 January 2026).
    [17] European Court of Human Rights (2024) 'Podchasov v. Russia, App. no. 33696/19', ECHR. Available at: https://hudoc.echr.coe.int/eng?i=001-230854 (Accessed: 21 January 2026).
    [18] European Union (2022) 'Digital Services Act (Regulation 2022/2065)', EUR-Lex. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R2065 (Accessed: 21 January 2026).
    [19] GamesIndustry.biz (2025) '"We believe these restrictions harm creative expression": reaction to the OSA', GamesIndustry.biz. Available at: https://www.gamesindustry.biz/ (Accessed: 21 January 2026).
    [20] House of Commons Library (2025) 'Implementation of the Online Safety Act 2023 - Debate Pack', UK Parliament. Available at: https://commonslibrary.parliament.uk/research-briefings/cdp-2025-0044/ (Accessed: 21 January 2026).
    [21] ICO (2020) 'Age Appropriate Design Code & Data protection by design and by default guidance', Information Commissioner's Office. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/ico-codes-of-practice/age-appropriate-design-code/ (Accessed: 21 January 2026).
    [22] IETF (2024) 'Privacy Pass Architecture - RFC 9576', Internet Engineering Task Force. Available at: https://datatracker.ietf.org/doc/rfc9576/ (Accessed: 21 January 2026).
    [23] Ipsos (2025) 'Britons back Online Safety Act's age checks; sceptical of effectiveness; unwilling to share ID', Ipsos. Available at: https://www.ipsos.com/en-uk/ (Accessed: 21 January 2026).
    [24] ISO (2023) 'ISO/IEC 30107-3 - Biometric PAD - Testing and reporting', International Organization for Standardization. Available at: https://www.iso.org/standard/79520.html (Accessed: 21 January 2026).
    [25] IWF (2024) 'Annual Report 2024 - Data Insights', Internet Watch Foundation. Available at: https://www.iwf.org.uk/annual-report-2024/ (Accessed: 21 January 2026).
    [26] New Statesman (2025) 'Big Tech is the only winner of the Online Safety Act', New Statesman. Available at: https://www.newstatesman.com/ (Accessed: 21 January 2026).
    [27] Ofcom (2024) 'Protecting people from illegal harms online (Statement series incl. Vols 1-3)', Ofcom. Available at: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/protecting-people-from-illegal-content-online (Accessed: 21 January 2026).
    [28] Ofcom (2025) 'Age assurance guidance for services publishing pornographic content (Part 5)', Ofcom. Available at: https://www.ofcom.org.uk/online-safety/protecting-children/age-assurance-and-part-5 (Accessed: 21 January 2026).
    [29] Ofcom (2025) 'Additional Safety Measures: Draft guidance & consultation', Ofcom. Available at: https://www.ofcom.org.uk/consultations-and-statements/category-2/additional-online-safety-measures (Accessed: 21 January 2026).
    [30] Ofcom (2025) 'Protection of children online: Codes of practice laid in Parliament', Ofcom. Available at: https://www.ofcom.org.uk/online-safety/protecting-children/childrens-safety-codes-of-practice (Accessed: 21 January 2026).
    [31] Ofcom (2024) 'Illegal content codes and guidance (hub)', Ofcom. Available at: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content (Accessed: 21 January 2026).
    [32] Ofcom (2025) 'Roadmap to regulation; Important dates for compliance', Ofcom. Available at: https://www.ofcom.org.uk/online-safety/information-for-industry/roadmap-to-regulation (Accessed: 21 January 2026).
    [33] Ofcom (2025) 'Notices to 4chan and 8kun regarding OSA compliance', Ofcom. Available at: https://www.ofcom.org.uk/online-safety/information-for-industry/enforcement (Accessed: 21 January 2026).
    [34] Ofcom (2025) 'Online Safety Act 2023 (Fees Notification) Regulations 2025; QWR consultation', Ofcom. Available at: https://www.ofcom.org.uk/online-safety/information-for-industry/fees (Accessed: 21 January 2026).
    [35] Ofcom (2025) 'Final guidance on information-gathering and enforcement; Transparency Guidance', Ofcom. Available at: https://www.ofcom.org.uk/online-safety/information-for-industry/guidance (Accessed: 21 January 2026).
    [36] Ofcom/DSIT (2025) 'Parliamentary materials on scope, registers, temporary must-carry', UK Parliament. Available at: https://www.parliament.uk/ (Accessed: 21 January 2026).
    [37] Open Rights Group (2025) 'How to Fix the Online Safety Act: A Rights-First Approach', Open Rights Group. Available at: https://www.openrightsgroup.org/publications/how-to-fix-the-online-safety-act/ (Accessed: 21 January 2026).
    [38] Open Rights Group (2025) 'Tell your MP: The Online Safety Act isn't working', Open Rights Group. Available at: https://www.openrightsgroup.org/campaigns/ (Accessed: 21 January 2026).
    [39] PC Gamer (2025) 'Steam users in the UK who want 'mature game content' must now register a credit card', PC Gamer. Available at: https://www.pcgamer.com/ (Accessed: 21 January 2026).
    [40] Persona (2025) 'Pricing', Persona. Available at: https://withpersona.com/pricing (Accessed: 21 January 2026).
    [41] Reddit (2025) 'Age verification in the UK', Reddit Help. Available at: https://support.reddithelp.com/hc/en-us/articles/age-verification-uk (Accessed: 21 January 2026).
    [42] Tom's Guide (2025) 'The top 3 cybersecurity risks posed by the Online Safety Act', Tom's Guide. Available at: https://www.tomsguide.com/ (Accessed: 21 January 2026).
    [43] U.S. Supreme Court (2025) 'Free Speech Coalition, Inc. v. Paxton (No. 23-1122)', Supreme Court of the United States. Available at: https://www.supremecourt.gov/opinions/24pdf/23-1122_6537.pdf (Accessed: 21 January 2026).
    [44] UK Parliament (2023) 'Online Safety Act 2023 (c. 50)', UK Legislation. Available at: https://www.legislation.gov.uk/ukpga/2023/50/contents/enacted (Accessed: 21 January 2026).
    [45] UK Parliament (2025) 'OSA 2023 (Category 1, 2A, 2B Threshold Conditions) Regulations 2025', UK Legislation. Available at: https://www.legislation.gov.uk/uksi/2025/ (Accessed: 21 January 2026).
    [46] UK Parliament (2018) 'Data Protection Act 2018', UK Legislation. Available at: https://www.legislation.gov.uk/ukpga/2018/12/contents/enacted (Accessed: 21 January 2026).
    [47] UK Parliament (2025) 'Commencement No. 5 Regulations 2025 (CSEA reporting)', UK Legislation. Available at: https://www.legislation.gov.uk/ (Accessed: 21 January 2026).
    [48] UK Parliament Petitions (2025) 'Repeal the Online Safety Act (petition 722903)', UK Parliament. Available at: https://petition.parliament.uk/petitions/722903 (Accessed: 21 January 2026).
    [49] W3C (2024) 'Verifiable Credentials Data Model 2.0', World Wide Web Consortium. Available at: https://www.w3.org/TR/vc-data-model-2.0/ (Accessed: 21 January 2026).
    [50] Wikimedia Foundation (2025) 'Updates on OSA Categorisation challenge; High Court judgment', Wikimedia Foundation. Available at: https://wikimediafoundation.org/news/ (Accessed: 21 January 2026).
