Six months after the Online Safety Act's children's protections went live, Ofcom has moved from guidance to enforcement. The regulator has issued its first major fines—£1 million against AVS Group Ltd, £50,000 against Itai Tech Ltd—and expanded investigations into platforms including 4chan. Meanwhile, the industry fees regime came into force on 11 December 2025, requiring providers with qualifying worldwide revenue above £250 million to pay annual fees. The "Additional Safety Measures" consultation closed on 20 October, crystallising the encryption debate; new guidance on women and girls' safety was finalised in November. Yet privacy incidents continue: Discord's contractor breach exposed 70,000 users' ID selfies; VPN detection schemes proliferate; and small sites face shutdown costs that favour Big Tech. This update tracks enforcement patterns, market responses, and the critical question: will the UK build safety through privacy-preserving engineering, or cement identity-first surveillance infrastructure?
Executive summary: six months of enforcement in review
Since the Online Safety Act's children's protections went live on 25 July 2025, Ofcom has shifted from guidance to active enforcement. The regulator has issued its first substantial fines, expanded investigations, and finalised new guidance—while privacy incidents and market concentration raise questions about whether the regime is achieving its safety goals without eroding fundamental rights.
Key developments (July–December 2025)
- Enforcement actions: AVS Group Ltd fined £1 million (4 Dec) for inadequate age verification across 18 sites; Itai Tech Ltd fined £50,000 (20 Nov) for failing to implement age assurance; 4chan investigation expanded with a £20,000 penalty for non-compliance with information requests.
- Industry fees: Fees regime came into force 11 December 2025; providers with qualifying worldwide revenue (QWR) above £250m must notify and pay annual fees; four-month notification window for 2026/27 charging year.
- Additional Safety Measures consultation: Closed 20 October 2025; responses will shape Ofcom's stance on "proactive technology" and encryption compatibility.
- Women and girls safety guidance: Finalised 25 November 2025; assessment of provider implementation planned for mid-2027.
- Privacy incidents: Discord contractor breach exposed 70,000 users' government ID selfies (Oct 2025); VPN detection schemes proliferate; identity-first defaults create attractive breach targets.
- Market impacts: Small sites face shutdown costs (£2,400+ annually); traffic shifts to non-compliant rivals; Big Tech consolidates market share.
Critical questions
Six months in, three questions define the regime's trajectory:
- Will enforcement favour privacy-preserving methods? Early fines target inadequate age assurance, but do not explicitly prefer PET-first approaches over identity uploads. Ofcom's "Additional Safety Measures" guidance will signal whether the regulator treats unlinkable credentials and on-device checks as first-class compliance.
- Can small sites afford compliance? Typical quotes run into thousands annually; volunteer-run communities face shutdown costs. Shared PET infrastructure could invert the cost curve, but codes and procurement preferences have not yet signalled that direction.
- Will encryption survive? The "proactive technology" debate remains live. Client-side scanning proposals risk undermining end-to-end encryption for everyone, while sophisticated offenders route around detection.
Major enforcement actions: fines, penalties, and compliance orders
Ofcom's enforcement has moved from warnings to substantial penalties. The regulator has issued its first major fines, expanded investigations, and signalled that non-compliance will carry real costs—while questions remain about whether enforcement patterns favour privacy-preserving compliance or identity-first defaults.
AVS Group Ltd: £1 million fine (4 December 2025)
On 4 December 2025, Ofcom fined AVS Group Ltd £1 million for inadequate age verification measures across its 18 adult websites. An additional £50,000 fine was imposed for failing to respond to information requests. The company was given 72 hours to implement more effective age assurance systems or face further daily fines of £1,000 (Ofcom, 2025).
Context: AVS Group operates multiple adult sites, making it a high-profile target for Part 5 enforcement. The fine represents Ofcom's largest penalty to date under the Online Safety Act, signalling that the regulator will pursue substantial penalties for non-compliance.
Compliance requirements: The 72-hour deadline requires AVS Group to implement "highly effective" age assurance—but Ofcom's guidance does not explicitly prefer privacy-preserving methods. The company could comply via document uploads (creating identity databases) or via PET-first approaches (unlinkable credentials, on-device checks). Enforcement patterns will signal which path Ofcom favours.
Itai Tech Ltd: £50,000 fine (20 November 2025)
On 20 November 2025, Itai Tech Ltd, operator of a nudification site, was fined £50,000 for failing to implement highly effective age assurance measures to protect children from accessing pornographic content. An additional £5,000 penalty was imposed for non-compliance with a statutory information request (Ofcom, 2025).
Significance: Itai Tech's fine demonstrates that Ofcom will enforce Part 5 duties against smaller operators, not just large platforms. The penalty also highlights the challenge for sites that cannot afford commercial age-verification services—raising questions about whether the regime creates a "shutdown tax" for small publishers.
4chan: expanded investigation and £20,000 penalty
Ofcom expanded its investigation into 4chan Community Support LLC to assess compliance with child protection duties, particularly regarding age assurance systems. This follows a previous £20,000 fine for failing to comply with a statutory information request (Ofcom, 2025).
Background: In August 2025, 4chan and Kiwi Farms filed suit in US federal court challenging Ofcom's jurisdiction and characterising the OSA as extra-territorial censorship. By mid-October, Ofcom had warned 4chan that it faced a £20,000 penalty plus £100-per-day fines for refusing to submit an illegal-harms risk assessment. The expanded investigation signals that Ofcom will pursue enforcement against platforms that resist compliance, even when they challenge jurisdiction.
Implications: 4chan's case tests whether Ofcom can enforce the OSA against platforms that operate primarily outside the UK but are accessible to UK users. The outcome will shape how platforms respond to Ofcom's jurisdiction claims—and whether enforcement creates incentives for platforms to geo-block UK users rather than comply.
Enforcement patterns and privacy implications
Early enforcement actions target inadequate age assurance, but do not explicitly prefer privacy-preserving methods. Fines focus on whether platforms have implemented "highly effective" age assurance—not on whether they have minimised identifiability, linkability, and observability. This creates a risk that enforcement patterns will favour identity-first defaults (document uploads, server-side biometrics) over PET-first approaches (unlinkable credentials, on-device checks).
What to watch: Ofcom's "Additional Safety Measures" guidance, expected in early 2026, will signal whether the regulator treats PET-first methods as first-class compliance. If guidance favours identity uploads, enforcement patterns will entrench surveillance infrastructure; if it prefers privacy-preserving methods, the regime could achieve safety goals without eroding fundamental rights.
Industry fees regime: who pays and how much
On 11 December 2025, the industry fees regime came into force. Providers with qualifying worldwide revenue (QWR) above £250 million must notify Ofcom and pay annual fees—creating a new revenue stream for the regulator while raising questions about how fees are calculated and whether they create barriers for smaller platforms.
Fee structure and thresholds
The Online Safety Act 2023 (Fees Notification) Regulations 2025 specify that providers above a QWR threshold must notify and pay an annual fee. Draft guidance sets out thresholds and notification mechanics; the regime came into force on 11 December 2025 (UK Government, 2025).
QWR calculation: Qualifying Worldwide Revenue is aligned to service parts where regulated user-generated content, search content, or provider pornographic content may be encountered. The averaging method uses mean UK monthly active users (MAU) over the preceding six months. This means platforms must calculate revenue attributable to regulated surfaces—a complex exercise that may require legal and accounting support.
Notification window: A four-month notification window is in place for relevant platforms to submit their revenue data to Ofcom for the 2026/27 charging year. Providers must notify Ofcom by 11 April 2026 if they meet the QWR threshold.
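As a rough illustration of the arithmetic, the check below apportions worldwide revenue to regulated service parts by average-MAU share and compares the result to the threshold. Only the £250 million threshold and the six-month averaging come from the guidance described above; the apportionment-by-MAU-share method, function names, and figures are simplifying assumptions, not the statutory calculation.

```python
from statistics import mean

QWR_THRESHOLD_GBP = 250_000_000  # notification threshold from the regulations


def qualifying_worldwide_revenue(total_revenue_gbp: float,
                                 regulated_mau: list[int],
                                 total_mau: list[int]) -> float:
    """Apportion worldwide revenue to regulated service parts using the
    mean UK monthly active users over the preceding six months.
    (Illustrative apportionment only, not the statutory method.)"""
    if len(regulated_mau) != 6 or len(total_mau) != 6:
        raise ValueError("expected the preceding six months of MAU figures")
    share = mean(regulated_mau) / mean(total_mau)
    return total_revenue_gbp * share


def must_notify(total_revenue_gbp: float,
                regulated_mau: list[int],
                total_mau: list[int]) -> bool:
    """True if the provider exceeds the QWR threshold and must notify Ofcom."""
    qwr = qualifying_worldwide_revenue(total_revenue_gbp, regulated_mau, total_mau)
    return qwr > QWR_THRESHOLD_GBP
```

On these assumptions, a £1bn-revenue provider whose regulated surfaces account for 30% of average UK MAU would cross the threshold, while a £500m provider with the same share would not.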
Fee amounts and impact
Ofcom has not yet published final fee amounts, but the regime will part-fund the regulator's online safety work. Fees are calculated based on QWR, creating a progressive structure where larger platforms pay more. However, the £250 million threshold means that many mid-sized platforms will be exempt—while questions remain about whether fees create barriers for smaller platforms that cannot afford compliance costs.
Market concentration risk: If fees create fixed costs that large platforms can amortise but small platforms cannot, the regime could accelerate market concentration. Large platforms may absorb fees as a cost of doing business, while small sites face shutdown costs that favour Big Tech.
Transparency and accountability
The fees regime raises questions about transparency and accountability. How much revenue will Ofcom collect? How will fees be used? Will fee-paying platforms receive preferential treatment in enforcement? These questions will shape public trust in the regime—and whether fees are seen as a legitimate cost recovery mechanism or a revenue-raising exercise.
What to watch: Ofcom's publication of final fee amounts and fee-paying platform lists will signal the regime's scale and impact. If fees are substantial and concentrated among a small number of platforms, the regime may create dependencies that raise questions about regulatory capture.
Additional Safety Measures consultation: encryption debate crystallises
The "Additional Safety Measures" consultation closed on 20 October 2025, crystallising the encryption debate. The consultation asked whether platforms should use "proactive technology" to detect child sexual abuse material (CSAM) and terrorism content—raising questions about whether such measures are compatible with end-to-end encryption (E2EE).
Consultation scope and responses
The consultation sought views on whether platforms should use "proactive technology" to detect harmful content, including CSAM and terrorism material. The consultation asked about compatibility with E2EE, proportionality, and technical feasibility. Responses closed on 20 October 2025; Ofcom is expected to publish final guidance in early 2026 (Ofcom, 2025).
Key questions: The consultation asked whether client-side scanning (scanning content on users' devices before encryption) is compatible with E2EE. Technical consensus holds that scanning before encryption or inserting privileged scanning hooks is incompatible with E2EE—while sophisticated offenders route around detection, the public inherits a universal interception surface.
Technical consensus and civil society response
Technical experts, civil society groups, and data protection bodies have argued that "proactive technology" proposals are incompatible with E2EE and fundamental rights. The European Court of Human Rights has held that blanket decryption or its functional equivalent collides with Article 8 ECHR (right to respect for private life) (European Court of Human Rights, 2024).
Industry response: Major platforms including Signal, WhatsApp, and Element have warned that client-side scanning would undermine E2EE and create surveillance infrastructure. The Open Rights Group, Big Brother Watch, and Article 19 have called for Ofcom to explicitly rule out measures that undermine encryption.
Government pressure and DSIT letter
On 12 November 2025, the Secretary of State for Science, Innovation and Technology, Liz Kendall, expressed disappointment in delays to the implementation of additional duties on categorised services. She emphasised the need for timely enforcement of the Online Safety Act to protect users—signalling government pressure for Ofcom to move quickly on "proactive technology" guidance (Department for Science, Innovation and Technology, 2025).
Implications: Government pressure may push Ofcom toward measures that undermine encryption, despite technical consensus and civil society opposition. The outcome will shape whether the UK maintains E2EE or creates surveillance infrastructure that undermines privacy for everyone.
What to watch: early 2026 guidance
Ofcom's final guidance, expected in early 2026, will crystallise the encryption debate. If guidance explicitly disallows client-side scanning, the regime may preserve E2EE; if it allows "proactive technology" when "technically feasible," the regime may create incentives for platforms to undermine encryption. The outcome will shape privacy and security for millions of UK users.
Women and girls safety guidance: new duties take shape
On 25 November 2025, Ofcom finalised guidance aimed at improving online safety for women and girls. The guidance sets out new duties for platforms to address harms including intimate image abuse, cyberflashing, and online harassment—while raising questions about how platforms will implement these duties and whether they will create new surveillance risks.
Guidance scope and duties
The guidance requires platforms to assess risks to women and girls, implement proportionate safety measures, and provide effective reporting and redress mechanisms. An assessment of how providers are implementing these measures is planned for mid-2027 (Ofcom, 2025).
Key harms addressed: The guidance focuses on intimate image abuse (non-consensual sharing of intimate images), cyberflashing (unsolicited sexual images), and online harassment. Platforms must assess risks, implement mitigations, and provide reporting mechanisms—creating new compliance obligations beyond age assurance.
Implementation challenges
Implementing women and girls safety duties raises questions about how platforms will detect and respond to harms. Will platforms use automated content moderation? Will they require identity verification for reporting? Will they create new surveillance infrastructure to detect harmful content?
Privacy implications: If platforms implement women and girls safety duties through identity-first approaches (requiring ID verification for reporting, storing user data for moderation), the regime may create new surveillance risks. Privacy-preserving approaches (anonymous reporting, on-device content detection) could achieve safety goals without eroding fundamental rights.
Mid-2027 assessment
Ofcom plans to assess provider implementation in mid-2027—giving platforms approximately 18 months to implement new duties. The assessment will signal whether platforms are meeting their obligations and whether implementation patterns favour privacy-preserving or identity-first approaches.
What to watch: Platform implementation patterns will shape whether women and girls safety duties create new surveillance infrastructure or achieve safety goals through privacy-preserving engineering. Early signals from platform announcements and technical choices will indicate the direction of travel.
Platform responses: adaptation, exit, and circumvention
Platforms have responded to Online Safety Act enforcement through adaptation, exit, and circumvention. Some platforms have implemented age assurance; others have geo-blocked UK users; still others have deployed workarounds that raise questions about effectiveness and privacy.
Compliance adaptations
Many platforms have implemented age assurance to comply with Part 5 duties. However, implementation patterns vary widely—from document uploads (creating identity databases) to privacy-preserving methods (unlinkable credentials, on-device checks). Early deployments have skewed toward identity-first defaults, raising questions about whether the regime is creating surveillance infrastructure.
Steam credit-card gating: Valve requires UK Steam accounts to register a credit card before viewing "mature" games or community hubs, pitching it as a privacy-preserving way to meet OSA duties. However, credit-card requirements create financial barriers and may exclude users who cannot afford cards or prefer not to link payment methods to gaming accounts (PC Gamer, 2025).
Geo-blocking and exit
Some platforms have geo-blocked UK users rather than implement age assurance. File-sharing hosts including Krakenfiles and Nippydrive have geo-blocked UK users; smaller sites have shut down rather than pay compliance costs. These responses raise questions about whether the regime is achieving its safety goals or simply pushing harmful content to non-compliant platforms.
Small-site shutdowns: Volunteer-run communities including The Hamster Forum have shut down because they cannot afford commercial age-verification services. Independent game creators on Itch.io saw entire author pages blocked when a single upload triggered an adult flag. These outcomes illustrate how compliance costs squeeze small publishers and consolidate attention on the largest platforms (New Statesman, 2025).
Circumvention and VPN detection
Users have responded to age assurance requirements by using VPNs to access content—leading platforms to deploy VPN detection schemes. The Age Verification Providers Association (AVPA) has lobbied for behavioural profiling (UK-daytime activity, UK-English locale, following mostly UK accounts) and mandatory ID checks when VPN use is detected (Age Verification Providers Association, 2025).
Privacy implications: VPN detection schemes create mass behavioural surveillance—profiling users based on activity patterns, device signals, and location data. These schemes disproportionately flag privacy-protective adults, journalists, and abuse survivors while remaining porous to determined evaders. They risk enshrining surveillance infrastructure across the stack without clear gains in child protection.
Traffic shifts and market impacts
Ofcom's monitoring shows overall UK visits to pornography sites falling by roughly a third post-go-live, while Pornhub owner Aylo reports a 77% domestic drop and claims users are migrating to non-compliant rivals. Ofcom counters that the regime ends an "age-blind internet" and is escalating against holdouts (Biometric Update, 2025).
Market concentration: Traffic shifts and compliance costs favour large platforms that can amortise costs and absorb enforcement risk. Small sites face shutdown costs; users migrate to Big Tech platforms that can afford compliance. The regime may accelerate market concentration, reducing competition and user choice.
Privacy incidents and data breaches: the surveillance infrastructure leaks
Identity-first age assurance creates attractive targets for hackers and state surveillance. Recent breaches demonstrate that centralised identity databases are vulnerable—while raising questions about whether the regime is creating surveillance infrastructure that undermines privacy and security.
Discord contractor breach (October 2025)
In October 2025, Discord confirmed a contractor compromise that exposed approximately 70,000 users' government ID selfies and customer-support messages, despite the platform itself not being breached. The breach highlights how third-party contractors handling age verification data create new attack surfaces—while raising questions about whether platforms are adequately securing identity data (BBC News, 2025).
Impact: Stolen ID photos can be reused to open financial accounts, commit identity theft, and enable fraud. The breach demonstrates that centralised identity databases are attractive targets for hackers—and that breaches can have lasting consequences for users whose identity data is exposed.
Chrome VPN extension surveillanceware
A widely installed Chrome VPN extension was caught secretly capturing full-page screenshots and exfiltrating data—an example of how "workarounds" can also introduce surveillanceware. The extension, which had over 100,000 installs, demonstrates how users seeking privacy may inadvertently install malicious software that undermines their security (CyberInsider, 2025).
Implications: When users seek workarounds to age assurance requirements, they may install software that creates new surveillance risks. The incident highlights how identity-first defaults create incentives for users to seek alternatives—some of which may be malicious.
Cross-border data demands
Some age-verification vendors may face cross-border data demands from foreign governments, raising questions about whether identity data collected for UK compliance could be accessed by foreign authorities. Without strict data minimisation and on-device processing, age-assurance systems become attractive targets for state surveillance.
Privacy-preserving alternatives
Privacy-preserving age assurance (unlinkable credentials, on-device checks) reduces breach impact by minimising the data that can be stolen. If a breach occurs, attackers cannot access identity data that was never collected—demonstrating how PET-first approaches can achieve safety goals while reducing surveillance risk.
What to watch: Future breaches will test whether platforms and vendors are adequately securing identity data. If breaches continue, the regime may face pressure to prefer privacy-preserving methods that reduce breach impact.
Market concentration and small-site impacts
Compliance costs behave like fixed costs: large platforms amortise them; small services and community projects struggle. The regime creates pressure to geo-block the UK or adopt the cheapest-to-integrate (often most invasive) verification vendor—accelerating market concentration and reducing user choice.
Compliance cost structure
Typical quotes for small forums run into the low thousands annually (e.g., ~£2,400/year), and popular vendors' entry plans start around US $250/month on annual contracts. For volunteer-run communities, that pricing is effectively a shutdown tax—hence the wave of UK geo-blocks in mid-2025 (Persona, 2025).
Fixed-cost economics: Compliance costs are fixed costs that large platforms can amortise across millions of users, but small sites cannot. This creates pressure for small sites to shut down or geo-block UK users—reducing competition and user choice.
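A back-of-envelope comparison makes the amortisation point concrete. The £2,400 figure comes from the vendor quotes above; the membership counts and the large-platform cost are hypothetical round numbers chosen purely for illustration.

```python
def annual_cost_per_user(compliance_cost_gbp: float, users: int) -> float:
    """Spread a fixed annual compliance cost across a service's user base."""
    return compliance_cost_gbp / users


# Volunteer forum: ~£2,400/year across 500 members (hypothetical membership).
small_forum = annual_cost_per_user(2_400, 500)  # £4.80 per user per year

# Hypothetical major platform: £1m/year across 50m UK users.
big_platform = annual_cost_per_user(1_000_000, 50_000_000)  # £0.02 per user per year
```

On these figures the small forum pays roughly 240 times more per user than the large platform, which is the fixed-cost pressure the paragraph above describes.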
Small-site shutdowns
As noted above, volunteer-run communities such as The Hamster Forum have closed rather than pay for commercial age verification, and independent creators on Itch.io have seen entire author pages blocked by a single adult-flagged upload—outcomes that squeeze small publishers and consolidate attention on the largest platforms (New Statesman, 2025).
Creative expression impacts: Small creators and independent publishers face disproportionate compliance costs, reducing diversity and creative expression. The regime may accelerate market concentration, favouring large platforms that can afford compliance over small sites that cannot.
Shared PET infrastructure
A standardised, privacy-preserving stack—unlinkable credentials, on-device checks, shared open tooling—would reduce per-service cost, improve security, and avoid lock-in. Until codes and procurement preferences signal that direction, the market will skew toward identity uploads and proprietary SDKs.
What to watch: Ofcom's guidance and enforcement patterns will signal whether the regime favours shared PET infrastructure or proprietary identity vendors. If guidance prefers privacy-preserving methods, small sites may be able to comply without expensive commercial services.
International developments: EU chat control and US state laws
The UK Online Safety Act operates in an international context where other jurisdictions are pursuing similar regulatory approaches. EU "chat control" proposals and US state age-verification laws create parallel debates about encryption, privacy, and child protection—while raising questions about whether global trends favour surveillance infrastructure or privacy-preserving engineering.
EU "chat control" (CSA detection orders)
Parallel EU proposals (widely dubbed "chat control") would enable detection orders that, in practice, require scanning private messages, including in E2EE contexts. Under Denmark's Council presidency (since 1 July 2025), compromise texts revived detection orders, with a tentative Council vote targeted for 14 October 2025. Civil-society groups and data-protection bodies argue such orders are incompatible with E2EE and fundamental rights. Political negotiations remain live into 2026 (European Union, 2025).
Implications: If EU "chat control" passes, it would create pressure for UK platforms to implement similar measures—potentially undermining E2EE across Europe. The outcome will shape whether Europe maintains encryption or creates surveillance infrastructure.
US state age-verification laws
US states including Texas have enacted age-verification laws requiring adult sites to verify users' ages. On 27 June 2025, the US Supreme Court upheld Texas HB 1181, allowing states to require age verification for adult content. However, platforms have responded by geo-blocking states or implementing workarounds—raising questions about effectiveness and privacy impacts (US Supreme Court, 2025).
Comparison with UK: US state laws create a patchwork of requirements that platforms must navigate, while the UK's national regime creates uniform obligations. However, both approaches raise similar questions about privacy, effectiveness, and market concentration.
International coordination
As multiple jurisdictions pursue age-verification requirements, platforms face increasing compliance costs and complexity. International coordination could reduce fragmentation—but may also create pressure for harmonised surveillance infrastructure. The outcome will shape whether global trends favour privacy-preserving engineering or identity-first defaults.
What's next: 2026 roadmap and critical decisions
The next 6–18 months will define the Online Safety Act's trajectory. Critical decisions on encryption, enforcement patterns, and privacy-preserving compliance will shape whether the regime achieves safety goals through privacy-preserving engineering or cements identity-first surveillance infrastructure.
Early 2026: Additional Safety Measures guidance
Ofcom's final guidance on "Additional Safety Measures," expected in early 2026, will crystallise the encryption debate. Guidance that explicitly rules out client-side scanning would preserve E2EE; guidance permitting "proactive technology" wherever "technically feasible" would push platforms toward weakening it.
What to track: Guidance language on E2EE compatibility; whether PET-first methods are treated as first-class compliance; enforcement patterns that signal regulatory preferences.
2026: Categorisation register and Category 1/2A/2B duties
Ofcom is expected to publish the register of categorised services in 2026, identifying Category 1 (very large user-to-user), Category 2A (search), and Category 2B (user-to-user with direct messaging) platforms. These platforms will face additional duties beyond children's protections—creating new compliance obligations and enforcement opportunities.
What to track: Which platforms are categorised; whether non-profits (e.g., Wikipedia) appear on Category 1 list; how categorised platforms implement additional duties.
2026–2027: Enforcement patterns and appeals
As enforcement intensifies, patterns will emerge about which compliance approaches Ofcom favours. Appeals and judicial reviews will test proportionality and Article 8 compatibility—potentially shaping whether the regime preserves privacy or creates surveillance infrastructure.
What to track: Number and size of penalties; proportion citing biometric/image-retention failures; appeals outcomes; judicial review decisions on proportionality.
Mid-2027: Women and girls safety assessment
Ofcom's assessment of provider implementation of women and girls safety duties, planned for mid-2027, will signal whether platforms are meeting their obligations and whether implementation patterns favour privacy-preserving or identity-first approaches.
What to track: Platform implementation patterns; whether privacy-preserving methods are used; assessment findings and enforcement actions.
Privacy-preserving compliance: PET-first approaches gain traction
Despite early industry deployments skewing toward identity-first defaults, privacy-preserving age assurance is gaining traction. Standards and open implementations exist; early deployments show that PET-first approaches can meet Ofcom's "highly effective" bar while minimising surveillance risk.
Selective-disclosure credentials
Selective-disclosure credentials (W3C Verifiable Credentials 2.0, BBS+) allow banks and mobile network operators to issue "over-18" attestations during familiar KYC flows. Users hold credentials locally in a wallet; platforms verify zero-knowledge proofs that never reveal identity or exact age. Each presentation is unlinkable across sites—achieving age assurance without creating identity databases (World Wide Web Consortium, 2024).
Implementation status: Standards exist (W3C VCs 2.0, IETF Privacy Pass); early deployments show that user journeys can be as quick as document uploads. What's missing is regulatory preference signalling—codes and enforcement that treat PET-first as the default, not an exotic exception.
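The data-minimisation shape of that flow can be sketched in a few lines. This is not a real BBS+ or zero-knowledge implementation (a shared-key HMAC stands in for the issuer's signature, so it is neither unlinkable nor publicly verifiable); it only shows which fields cross each boundary: the platform receives a signed over-18 predicate and nothing else. All names are illustrative.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the bank/MNO issuer (sketch only)


def issue_over_18_credential(is_over_18: bool) -> dict:
    """Issuer (e.g. a bank during KYC) attests only the predicate.
    No name, date of birth, or document image enters the credential."""
    claim = {"over_18": is_over_18, "nonce": secrets.token_hex(8)}
    tag = hmac.new(ISSUER_KEY,
                   json.dumps(claim, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def platform_verifies(credential: dict) -> bool:
    """Platform checks the issuer's tag and the predicate; it never
    learns who the user is. (A real deployment would verify a
    zero-knowledge proof against the issuer's public key instead.)"""
    expected = hmac.new(ISSUER_KEY,
                        json.dumps(credential["claim"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["tag"]) and credential["claim"]["over_18"]
```

The point of the sketch is the claim's contents: a boolean and a nonce. A breach of the platform yields nothing linkable to an identity, which is the property BBS+-based presentations provide without the shared-key shortcut used here.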
On-device age estimation
On-device age estimation uses local models to perform one-off checks; only a binary flag leaves the device; no face images or templates transit to servers. This approach minimises identifiability, linkability, and observability—achieving age assurance without centralised biometric stores.
Challenges: Device diversity and certification requirements may limit adoption. However, on-device approaches reduce breach impact and surveillance risk—demonstrating how PET-first methods can achieve safety goals while preserving privacy.
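The on-device pattern reduces to a strict rule about what crosses the network boundary. In this sketch, `estimate_age_locally` is a hypothetical stand-in for a local ML model; the only thing ever sent to a server is a boolean.

```python
def on_device_age_gate(estimated_age: float, threshold: int = 18) -> dict:
    """Runs on the device. The age estimate, and any face image that
    produced it, never leave; only this minimal payload is transmitted."""
    return {"over_threshold": estimated_age >= threshold}


def estimate_age_locally() -> float:
    """Hypothetical placeholder for an on-device age-estimation model."""
    return 24.0


# The server-bound payload contains no image, no template, no exact age.
payload = on_device_age_gate(estimate_age_locally())
```

Minimising the payload this way is what reduces identifiability, linkability, and observability: there is no biometric store to breach because the sensitive data never existed off-device.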
Governance and standards
Privacy-preserving age assurance requires governance: DPIAs, bias audits, no raw biometrics stored, independent security reviews. Standards including BSI PAS 1296, ISO/IEC 30107-3, W3C VCs, and IETF Privacy Pass provide frameworks for implementation—but regulatory preference signalling is needed to drive adoption.
What's needed: regulatory preference signalling
Ofcom's guidance and enforcement patterns will signal whether PET-first approaches are treated as first-class compliance. If codes prefer privacy-preserving methods, platforms will adopt them; if codes favour identity uploads, surveillance infrastructure will entrench. The choice is regulatory—and the next 6–18 months will define it.
Civic action and legal challenges: the resistance continues
Civil society groups, technical organisations, and users continue to resist identity-first defaults and encryption-undermining measures. Legal challenges, parliamentary oversight, and public campaigns shape whether the regime preserves privacy or creates surveillance infrastructure.
Legal challenges
Platforms including Wikipedia and 4chan have challenged Ofcom's categorisation and enforcement decisions. While Wikipedia's judicial review was dismissed, the High Court stressed that Ofcom must shield Wikipedia from disproportionate duties—signalling that legal challenges can shape enforcement patterns (Wikimedia Foundation, 2025).
Future challenges: Claims grounded in proportionality and Article 8 compatibility could test whether identity-first defaults and encryption-undermining measures survive fundamental-rights scrutiny. Those outcomes will help determine whether the regime preserves privacy or entrenches surveillance infrastructure.
Parliamentary oversight
On 12 November 2025, the Secretary of State for Science, Innovation and Technology expressed disappointment at delays to implementation, signalling government pressure for faster enforcement. However, parliamentary oversight can also push for proportionality tests, cost transparency, and an explicit PET preference in codes.
What to watch: Parliamentary questions, select committee inquiries, and constituency pressure will shape whether the regime favours privacy-preserving engineering or identity-first defaults.
Public campaigns
The Open Rights Group's campaign—"Tell your MP: The Online Safety Act isn't working"—highlights wrongful censorship, restrictions on teens' expression, hand-off of data to unregulated vendors, and the ease with which young people bypass the law using VPNs. The campaign provides a one-click route to contact MPs—demonstrating how public pressure can shape regulatory outcomes (Open Rights Group, 2025).
Petition: A 500k-plus petition to repeal the Act captured public disquiet—an inchoate coalition of parents, technologists, and civil-liberties supporters saying: we want safety; we do not want an ID checkpoint for the lawful internet. Whether Parliament listens is a separate question, but democratic legitimacy erodes when implementation collides with settled privacy norms.
Technical community resistance
Technical experts, civil society groups, and data protection bodies continue to argue that identity-first defaults and encryption-undermining measures are incompatible with fundamental rights. Their sustained pressure is one of the forces that will determine which path the regime takes over the next 6–18 months.
Bottom line: surveillance by default or privacy by design?
Six months after the Online Safety Act's children's protections went live, the regime stands at a crossroads. Enforcement has intensified—£1 million fines, industry fees, expanded investigations—while privacy incidents and market concentration raise questions about whether the regime is achieving its safety goals without eroding fundamental rights.
The critical question: Will the UK build safety through privacy-preserving engineering, or cement identity-first surveillance infrastructure? Early enforcement patterns do not explicitly prefer PET-first approaches; privacy incidents demonstrate that centralised identity databases are vulnerable; market concentration favours Big Tech over small sites.
The path forward: A rights-preserving path exists: PET-first age assurance; encryption kept whole; design duties over device scanners; transparency and accountability over identity walls. The country that gave the world modern privacy law does not need to choose between children and civil liberties. It needs to choose competent regulation.
What to watch: Ofcom's "Additional Safety Measures" guidance, expected in early 2026, will crystallise the encryption debate. Enforcement patterns will signal whether PET-first methods are treated as first-class compliance. Market responses will shape whether small sites can afford compliance or face shutdown costs. The next 6–18 months will define whether the regime preserves privacy or creates surveillance infrastructure.
The choice is regulatory—and the clock is ticking.
References
- [1] Age Verification Providers Association (2025) 'Position papers on age assurance, VPN risk, and circumvention', AVPA Publications. Available at: https://www.avpassociation.com/publications (Accessed: 15 December 2025).
- [2] BBC News (2025) 'ID photos of 70,000 users may have been leaked, Discord says', BBC News. Available at: https://www.bbc.co.uk/news/technology-discord-id-leak (Accessed: 15 December 2025).
- [3] Biometric Update (2025) 'Is age verification killing porn site traffic? Aylo says yes, AVPA says no', Biometric Update. Available at: https://www.biometricupdate.com/2025/age-verification-porn-traffic (Accessed: 15 December 2025).
- [4] CyberInsider (2025) 'Chrome VPN extension with 100k installs screenshots all sites users visit', CyberInsider. Available at: https://www.cyberinsider.com/chrome-vpn-extension-screenshots (Accessed: 15 December 2025).
- [5] Department for Science, Innovation and Technology (2025) 'Implementation and enforcement of the Online Safety Act: follow-up letter to Ofcom', DSIT Correspondence. Available at: https://www.gov.uk/government/publications/online-safety-act-enforcement-letter (Accessed: 15 December 2025).
- [6] European Court of Human Rights (2024) 'Podchasov v. Russia, App. no. 33696/19', HUDOC. Available at: https://hudoc.echr.coe.int/eng?i=001-230854 (Accessed: 15 December 2025).
- [7] European Union (2025) 'CSA detection orders proposal; Council presidency materials', EUR-Lex. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0209 (Accessed: 15 December 2025).
- [8] New Statesman (2025) 'Big Tech is the only winner of the Online Safety Act', New Statesman. Available at: https://www.newstatesman.com/politics/2025/big-tech-online-safety-act (Accessed: 15 December 2025).
- [9] Ofcom (2025) 'AVS Group Ltd fined £1 million for inadequate age verification', Ofcom Enforcement Actions. Available at: https://www.ofcom.org.uk/news-centre/2025/avs-group-fine (Accessed: 15 December 2025).
- [10] Ofcom (2025) 'Itai Tech Ltd fined £50,000 for failing to implement age assurance', Ofcom Enforcement Actions. Available at: https://www.ofcom.org.uk/news-centre/2025/itai-tech-fine (Accessed: 15 December 2025).
- [11] Ofcom (2025) '4chan investigation expanded; £20k penalty for non-compliance', Ofcom Enforcement Actions. Available at: https://www.ofcom.org.uk/news-centre/2025/4chan-investigation (Accessed: 15 December 2025).
- [12] Ofcom (2025) 'Additional Safety Measures: Draft guidance and consultation', Ofcom Consultations. Available at: https://www.ofcom.org.uk/consultations-and-statements/online-safety-act/additional-safety-measures (Accessed: 15 December 2025).
- [13] Ofcom (2025) 'Guidance on online safety for women and girls', Ofcom Online Safety. Available at: https://www.ofcom.org.uk/online-safety/women-and-girls-guidance (Accessed: 15 December 2025).
- [14] Open Rights Group (2025) 'Tell your MP: The Online Safety Act isn't working', Open Rights Group Campaigns. Available at: https://www.openrightsgroup.org/campaigns/online-safety-act-isnt-working (Accessed: 15 December 2025).
- [15] PC Gamer (2025) 'Steam users in the UK who want mature game content must now register a credit card', PC Gamer. Available at: https://www.pcgamer.com/steam-uk-credit-card-mature-content (Accessed: 15 December 2025).
- [16] Persona (2025) 'Pricing', Persona Identity Verification. Available at: https://withpersona.com/pricing (Accessed: 15 December 2025).
- [17] UK Government (2025) 'Online Safety Act 2023 (Fees Notification) Regulations 2025', UK Statutory Instruments. Available at: https://www.legislation.gov.uk/uksi/2025/fees-notification (Accessed: 15 December 2025).
- [18] US Supreme Court (2025) 'Free Speech Coalition, Inc. v. Paxton (No. 23-1122)', Supreme Court of the United States. Available at: https://www.supremecourt.gov/opinions/24pdf/23-1122_k536.pdf (Accessed: 15 December 2025).
- [19] Wikimedia Foundation (2025) 'Updates on OSA Categorisation challenge; High Court judgment', Wikimedia Foundation News. Available at: https://wikimediafoundation.org/news/osa-categorisation-challenge (Accessed: 15 December 2025).
- [20] World Wide Web Consortium (2024) 'Verifiable Credentials Data Model 2.0; IETF Privacy Pass Architecture RFC 9576', W3C Recommendations. Available at: https://www.w3.org/TR/vc-data-model-2.0/ (Accessed: 15 December 2025).
