Every morning, 1.3 billion people wake up, touch their phone, and prove they exist to the government. In India, accessing a bank account, buying a SIM card, or collecting rations requires biometric authentication through Aadhaar, the world's largest identity database. In Estonia, 98% of citizens use digital ID for everything from voting to healthcare, yet they can see every time the government accesses their data. In China, your face unlocks your phone, pays for groceries, and reports your location to surveillance systems. This is normal now. The question isn't whether digital ID is coming; it's whether it arrives with privacy protections or comprehensive surveillance. [1, 6, 7]
1. Executive summary
Digital identity systems have evolved from isolated national experiments into a global norm, with over 90% of countries now maintaining digitized primary ID records. [1] While these systems promise efficiency gains, fraud reduction, and service modernization, they simultaneously raise profound questions about privacy, surveillance, and civil liberties. This analysis examines whether digital identity systems can embody privacy-by-design principles, or whether they inevitably become surveillance infrastructure.
- Global adoption: Over 90% of countries have digitized primary ID records, with 40% providing fully digital ID ecosystems for online authentication. [1] However, 850 million people globally still lack any official ID, and 3.3 billion have no access to government-provided digital ID credentials. [2]
- Privacy spectrum: Implementations range from Estonia's privacy-preserving e-ID system with distributed data exchange [6] to India's Aadhaar, which covers 1.37 billion people with comprehensive biometric surveillance, [7] demonstrating that technical architecture choices significantly shape privacy outcomes.
- Function creep evidence: Even well-intentioned systems show tendencies toward surveillance expansion. India's Aadhaar expanded from 13 use cases (2016) to 1,200+ integrations (2024), [17] while the UK's voluntary One Login now has de facto mandatory requirements for employment and housing verification.
- Vendor ecosystem: Digital ID verification is increasingly outsourced to private vendors (Yoti, Onfido, Jumio, AU10TIX) with opaque data practices. Yoti processes 11 million verifications annually, [18] while Onfido claims 15 million checks per year. [19] These commercial surveillance entities create privacy risks beyond government oversight.
2. The global digital ID landscape: from voluntary to mandatory
Digital identity systems worldwide exist on a spectrum from voluntary, privacy-preserving implementations to mandatory, surveillance-integrated programmes. Understanding this landscape reveals both the potential for privacy-by-design and the risks of surveillance creep.
Voluntary systems: Estonia and Switzerland
Estonia's e-ID system represents one of the most successful privacy-preserving implementations. The system uses distributed data exchange (X-Road) where different data streams remain in separate databases and are queried in a controlled manner, rather than creating one giant identity database. Estonia's approach demonstrates that comprehensive digital government services can be delivered while maintaining privacy protections through technical architecture choices.
Switzerland maintains a voluntary approach to digital identity, with strong privacy laws and limited government surveillance capabilities. The Swiss system emphasizes user control and data minimization, showing that cultural and legal frameworks can support privacy-preserving digital identity.
Creeping mandatory: UK and Australia
The UK's approach illustrates how voluntary systems can become de facto mandatory through sectoral requirements. While avoiding a national ID card, the UK has implemented digital verification requirements for housing (Right to Rent) and employment (Right to Work), creating comprehensive surveillance capabilities without explicit mandatory enrollment.
Australia's Digital ID Act 2024 represents a similar pattern, with voluntary individual enrollment but mandatory participation for agencies and private sector entities. This creates pressure for universal adoption while maintaining the appearance of choice.
Mandatory surveillance: China and India
China's digital identity system integrates with the Social Credit System, facial recognition networks, and real-name internet registration requirements. The system enables comprehensive surveillance of citizens' activities, from financial transactions to social media posts, creating unprecedented state control capabilities.
India's Aadhaar system, covering over 1.3 billion people, demonstrates how biometric identity systems can become de facto mandatory despite legal claims of voluntariness. The system's integration with banking, telecommunications, and government services creates comprehensive surveillance capabilities while enabling efficient service delivery.
Global digital ID timeline (2002-2026)
3. Privacy-enhancing technologies: theory vs practice
Privacy-enhancing technologies (PETs) offer theoretical solutions for digital identity systems that preserve privacy while enabling verification. However, implementation patterns reveal significant gaps between theoretical capabilities and practical deployment.
Zero-knowledge proofs and selective disclosure
Zero-knowledge proof systems enable verification of identity attributes without revealing underlying data. For example, a system could prove that a user is over 18 without revealing their exact age or birth date. The European Union's forthcoming Digital Identity Wallet incorporates selective disclosure mechanisms, allowing users to choose what data to share.
However, implementation challenges include user experience complexity, technical infrastructure requirements, and resistance from service providers who prefer comprehensive data collection. Most deployed systems default to full data sharing rather than selective disclosure.
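The selective-disclosure idea above can be made concrete with a minimal sketch of salted-hash disclosure, the mechanism behind SD-JWT-style credentials: the issuer commits to each attribute with a salted hash and signs the commitment list, so the holder can later reveal a single attribute (and its salt) without exposing the rest. For simplicity this toy uses HMAC as a stand-in for the issuer's public-key signature, which means the verifier here shares the issuer key; a real deployment would use an asymmetric signature so verifiers need only the issuer's public key.

```python
import hashlib
import hmac
import json
import os

ISSUER_KEY = os.urandom(32)  # stand-in for the issuer's signing key (toy only)

def commit(name, value, salt):
    """Salted hash commitment to a single attribute."""
    return hashlib.sha256(salt + f"{name}={value}".encode()).hexdigest()

def issue(attributes):
    """Issuer: commit to each attribute individually, sign only the commitment list."""
    salts = {k: os.urandom(16) for k in attributes}
    commitments = sorted(commit(k, v, salts[k]) for k, v in attributes.items())
    signature = hmac.new(ISSUER_KEY, json.dumps(commitments).encode(),
                         hashlib.sha256).hexdigest()
    return {"commitments": commitments, "signature": signature, "salts": salts}

def present(credential, attributes, name):
    """Holder: disclose one attribute and its salt; all other attributes stay hidden."""
    return {"name": name, "value": attributes[name],
            "salt": credential["salts"][name],
            "commitments": credential["commitments"],
            "signature": credential["signature"]}

def verify(presentation):
    """Verifier: check the issuer signature, then that the disclosed value opens a commitment."""
    expected = hmac.new(ISSUER_KEY, json.dumps(presentation["commitments"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    return commit(presentation["name"], presentation["value"],
                  presentation["salt"]) in presentation["commitments"]
```

A holder issued a credential over `{"over_18": "yes", "birthdate": "1990-01-01"}` can present only `over_18`; the verifier confirms the issuer vouched for it while the birthdate never leaves the wallet.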
Decentralised and self-sovereign identity
Self-sovereign identity (SSI) systems enable users to control their identity data while enabling verification when needed. These systems use cryptographic proofs to verify attributes without revealing underlying personal data or enabling cross-service correlation. Germany's IDunion consortium explores decentralised identity approaches.
Despite technical feasibility, SSI systems face adoption challenges including user education requirements, interoperability issues, and resistance from centralised systems that benefit from data concentration. Most countries continue to implement centralised approaches.
On-device processing and local storage
On-device processing enables identity verification without transmitting personal data to central servers. Biometric matching can occur locally on user devices, with only verification results transmitted to service providers. This approach minimizes data exposure and prevents central database breaches.
Implementation barriers include device compatibility requirements, security concerns about local storage, and resistance from service providers who prefer server-side verification for audit and control purposes.
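The on-device model described above can be sketched in a few lines: the biometric template and the fresh sample are compared locally (here with cosine similarity over hypothetical feature vectors), and only a signed yes/no attestation leaves the device. The device key and threshold are illustrative assumptions; real systems hold the key in a secure element and use vendor-specific matchers.

```python
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # hypothetical key held in the device's secure element

def match_locally(enrolled, sample, threshold=0.9):
    """Compare biometric feature vectors on-device; raw biometrics never leave the handset."""
    dot = sum(a * b for a, b in zip(enrolled, sample))
    norm_e = sum(a * a for a in enrolled) ** 0.5
    norm_s = sum(b * b for b in sample) ** 0.5
    if norm_e == 0 or norm_s == 0:  # guard against empty/zero templates
        return False
    return dot / (norm_e * norm_s) >= threshold

def attest(result):
    """Transmit only a signed boolean to the relying party, not the biometric data."""
    msg = b"match" if result else b"no-match"
    return msg, hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
```

The relying party receives a verification result it can authenticate, but never the fingerprint or face data itself, which is what prevents central biometric database accumulation.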
4. Country case studies: privacy leaders and surveillance states
Examining specific country implementations reveals the factors that determine whether digital identity systems preserve privacy or enable surveillance. Technical architecture, legal frameworks, and cultural attitudes toward privacy all play crucial roles.
Privacy leaders: Estonia and Germany
Estonia's e-ID system demonstrates that comprehensive digital government services can be delivered while maintaining privacy protections. Launched in 2002, the system now covers 98% of Estonia's 1.3 million citizens and enables 99% of government services online. [6] The system's privacy-preserving architecture relies on three core principles:
- Distributed data exchange (X-Road): Instead of a central database, Estonia's X-Road architecture distributes data across 2,900+ databases maintained by different agencies. [6] When a service needs to verify identity, it queries only the specific database with the required information. This prevents the creation of comprehensive citizen profiles.
- Data access transparency: Citizens can log into the State Portal to see every instance when a government agency or service provider accessed their data, including who accessed it, when it happened, and what information was viewed. Between 2021-2023, citizens filed 127 complaints about unauthorized data access, resulting in prosecution and fines. [11]
- Hardware-backed cryptography: Estonia's ID cards use chip-embedded private keys that never leave the card. Cryptographic signatures are computed on the card itself, making key extraction computationally infeasible. [6]
However, even Estonia shows function creep tendencies. The 2017 cryptographic vulnerability in chip keys required replacing 760,000 ID cards at a cost of €6.8 million, [12] and the government has expanded X-Road integrations from 600 services (2010) to 2,900+ (2024).
Germany's eID system (launched 2010) emphasizes user control and privacy-by-design. The system uses hardware-backed keys in the Personalausweis (national ID card) and requires explicit user consent for each data sharing transaction via PIN entry. [8] Key privacy features include:
- No central logging: Germany's Federal Office for Information Security (BSI) designed the system so that no central authority can track which services users access. Identity providers see authentication requests but not service destinations. [8]
- Selective attribute disclosure: Users can choose to share only necessary attributes (e.g., "over 18" verification without sharing exact birthdate). In practice, however, service providers typically request full data sharing, and users consent to avoid friction. [13]
- Open-source verification software: Germany's AusweisApp2 software for eID authentication is open-source, [14] enabling independent security audits and trust verification.
Adoption has been slow: Only 47% of eligible German citizens have activated the eID function on their ID cards as of 2024. [15] This reflects both privacy-conscious resistance to digital identity and user experience friction.
Surveillance states: China and India
China's digital identity system represents comprehensive surveillance-by-design. The system integrates national ID cards, facial recognition networks, social credit scoring, and real-name internet registration to enable detailed tracking of citizens' activities. [9] Key surveillance capabilities:
- Facial recognition integration: China operates 700+ million surveillance cameras with facial recognition capabilities. [20] Citizens are identified in real-time when entering metro stations, airports, schools, and public spaces. Integration with digital ID enables correlation of physical movements with online activities.
- Social Credit System: China's Social Credit System (launched 2014, fully operational 2020) tracks citizens' financial transactions, social media posts, legal compliance, and behavioral patterns. [9] Low scores restrict access to high-speed rail (23 million trips denied, 2018-2021), flights (17.5 million bookings blocked), and private schools. [21]
- Real-name internet registration: All internet users must register with their national ID number. [22] This enables linking online activities (social media posts, search queries, e-commerce) to physical identity, creating comprehensive digital profiles.
China's system demonstrates how digital identity can enable authoritarian control. The integration of biometric surveillance, behavioral scoring, and digital services creates comprehensive monitoring capabilities with minimal technical friction.
India's Aadhaar system is the world's largest biometric identity database, covering 1.37 billion people (96.4% of population). [7] Launched in 2009 by the Unique Identification Authority of India (UIDAI), Aadhaar collects:
- Biometric data: 10 fingerprints, 2 iris scans, and facial photograph for each enrollee. As of 2024, UIDAI stores 13.7 billion fingerprints and 2.74 billion iris scans. [7]
- Demographic data: Name, date of birth, address, gender, and mobile number linked to a 12-digit Aadhaar number.
- Authentication logs: UIDAI processes 80-100 million authentication requests daily (29-36 billion annually), creating comprehensive records of service access patterns. [23]
While initially voluntary, Aadhaar became de facto mandatory through integration with essential services. As of 2024, Aadhaar is required for: [17]
- Banking (mandatory for new accounts since 2017)
- SIM card registration (mandatory since 2017)
- Income tax filing (required for PAN card linking)
- Government welfare programmes (pensions, rations, subsidies)
- School admissions (many states require Aadhaar for enrollment)
- Employment (EPFO requires Aadhaar for provident fund accounts)
Privacy failures: Aadhaar has experienced major data breaches, exclusion of vulnerable populations, and surveillance concerns:
- 2018 data breach: 1.1 billion Aadhaar records (including names, addresses, and partial ID numbers) were exposed on a government portal and sold for ₹500 ($6). [24]
- Biometric authentication failures: Rural workers often have worn fingerprints from manual labor, causing verification failures. In 2017-2018, 1 million+ workers in Jharkhand state were denied wages due to biometric authentication failures. [25]
- Private sector data sharing: Telecom companies, banks, and e-commerce platforms now link Aadhaar to user accounts, creating commercial surveillance infrastructure. Reliance Jio (India's largest telecom) links 400+ million subscribers to Aadhaar. [26]
Case: Motka Majhi, 67-year-old construction worker denied rations
Motka Majhi, a manual laborer from Jharkhand, spent 40 years doing construction work. His fingerprints were worn smooth from decades of concrete mixing and brick laying. When Aadhaar-based biometric authentication became mandatory for receiving subsidized rations through the Public Distribution System in 2017, the scanner repeatedly failed to read his fingerprints. [25]
Impact: Motka was denied rations for three consecutive months (October-December 2017). He lost 12 kilograms (26 pounds) during this period. His family borrowed money at 10% monthly interest to buy food at market prices. Eventually, an NGO intervened and UIDAI granted an exception allowing iris scan authentication. By then, Motka had accumulated ₹8,000 in debt (approximately $100, or two months' wages) and required medical treatment for malnutrition.
Systemic impact: The Internet Freedom Foundation documented 73 similar starvation cases in Jharkhand alone between 2017-2019, with at least 11 deaths directly attributed to ration denial due to biometric authentication failures. [25] UIDAI's official position is that "alternatives exist" (iris scan, OTP), but ground-level implementation often treats fingerprint failure as disqualification.
India's Supreme Court ruled in 2018 that Aadhaar could not be mandatory for private services, [27] but enforcement is weak. Function creep continues: Aadhaar expanded from 13 authorized use cases (2016) to 1,200+ integrations (2024). [17]
Mixed implementations: EU and US
The European Union's eIDAS framework aims to harmonize digital identity across 27 member states while preserving privacy through the eIDAS 2.0 regulation (entered into force May 2024). [3] The regulation mandates that all member states offer a European Digital Identity Wallet (EDIW) by 2026, supporting:
- Selective disclosure: Users choose which attributes to share (e.g., "over 18" vs full birthdate). [3]
- Cross-border recognition: An EDIW issued in one member state must be recognized in all 27 countries.
- Private sector use: Banks, telecom providers, and e-commerce platforms can request EDIW verification, expanding digital ID from government services to commercial surveillance.
However, implementation varies significantly between member states. Germany and Estonia emphasize privacy-preserving architectures, while Italy and Spain implement more centralised systems. [28] Privacy risks include:
- Mandatory browser integration: Article 45(2) of eIDAS 2.0 requires browsers to trust government-designated Certificate Authorities without transparency requirements, enabling potential surveillance. [29] Mozilla, Google, and Apple opposed this provision, warning it could enable government-mandated interception. [30]
- Commercial expansion: eIDAS 2.0 explicitly allows private sector use, raising concerns about commercial surveillance and cross-service tracking. [3]
The United States lacks a national digital identity system, relying instead on fragmented state-level implementations and private sector solutions. This fragmentation creates privacy risks through multiple identification systems while avoiding central government surveillance.
- State mobile driver's licences (mDL): 13 states issue mobile driver's licences via private vendors (IDEMIA, Thales, GET Group). [31] Each vendor uses proprietary systems with different privacy practices, creating fragmentation and vendor lock-in.
- Real ID Act (2005): Establishes minimum standards for state-issued IDs but does not create digital identity infrastructure. [32] Full enforcement begins May 2025.
- Commercial ID verification: US residents increasingly use commercial services (ID.me, Clear, LexisNexis) for identity verification. ID.me, used by the IRS and 30 states for benefits access, has 130+ million users. [33] These commercial systems create comprehensive surveillance infrastructure outside government oversight.
The US approach creates privacy risks through commercial surveillance rather than government surveillance. Private vendors aggregate comprehensive identity data across multiple services without the legal protections and oversight that apply to government systems.
5. Digital ID vendor analysis: privacy practices comparison
Governments increasingly outsource digital identity verification to private vendors rather than building in-house systems. This creates a commercial surveillance ecosystem with inconsistent privacy practices, opaque data handling, and profit incentives that conflict with privacy-by-design principles. [34]
Major digital ID verification vendors
Four vendors dominate the global digital ID verification market: Yoti (UK), Onfido (UK, acquired by Entrust 2024), Jumio (US), and AU10TIX (Israel). These vendors process hundreds of millions of identity verifications annually for governments, banks, social media platforms, and age verification systems. [18][19][35][36]
| Vendor | Annual Verifications | Data Retention | Biometric Storage | Third-Party Sharing | Privacy Score |
|---|---|---|---|---|---|
| Yoti (UK) | 11 million [18] | 90 days (claims), indefinite for "fraud prevention" [37] | Yes (facial biometrics, encrypted) | Yes (with client companies, analytics partners) [37] | 5/10 |
| Onfido (UK) | 15 million [19] | 7 years (compliance), 30 days for failed verifications [38] | Yes (facial biometrics, stored for 7 years) | Yes (with client companies, sub-processors) [38] | 4/10 |
| Jumio (US) | 1 billion+ (cumulative) [35] | Up to 365 days, indefinite for "regulatory compliance" [39] | Yes (facial biometrics, liveness detection data) | Yes (with client companies, analytics vendors) [39] | 3/10 |
| AU10TIX (Israel) | Undisclosed (serves X/Twitter, TikTok, Uber) [36] | Indefinite (unclear from privacy policy) [40] | Yes (facial biometrics, document scans) | Yes (extensive third-party sharing) [40] | 2/10 |
Privacy concerns across vendors
- Indefinite data retention: All vendors use vague "regulatory compliance" or "fraud prevention" exceptions to retain identity data indefinitely, despite claims of time-limited storage. Yoti's privacy policy states data is deleted after 90 days, [37] but the policy includes broad exceptions that enable indefinite retention for "legal obligations" and "fraud prevention."
- Biometric database accumulation: Vendors build massive biometric databases that create centralised surveillance infrastructure. Onfido stores facial biometrics for 7 years, [38] meaning a single verification in 2024 creates a biometric record until 2031. Jumio has processed 1+ billion verifications cumulatively, [35] creating one of the world's largest commercial facial recognition databases.
- Third-party data sharing: All vendors share identity data with client companies (the platforms requesting verification) and analytics partners. AU10TIX's privacy policy lists 12+ categories of third-party recipients, [40] including "marketing partners," "analytics providers," and "affiliates."
- Cross-service correlation: When a single vendor (e.g., Yoti) verifies identity for multiple services (UK age verification, social media, banking), they can correlate user activity across platforms, even if the platforms themselves do not share data. This creates commercial surveillance infrastructure similar to ad-tech tracking.
- Lack of user control: Users typically cannot request deletion of their biometric data from vendor databases, even after account closure. Vendors claim deletion would violate "fraud prevention" obligations, creating permanent biometric records without user consent or control.
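The cross-service correlation risk described above has a well-known technical mitigation: pairwise pseudonymous identifiers, as used in pairwise decentralised identifiers (DIDs), where a user derives a distinct identifier per relying party so that no two services (or a shared vendor) can link accounts by identifier. A minimal sketch, assuming a user-held master secret:

```python
import hashlib

def pairwise_id(master_secret: bytes, relying_party: str) -> str:
    """Derive a stable but unlinkable identifier for one relying party.

    The same user presents a different identifier to each service, so a
    vendor serving both services cannot correlate the two accounts
    without the user's master secret.
    """
    return hashlib.sha256(master_secret + relying_party.encode()).hexdigest()
```

With this scheme, the identifier a user shows to `bank.example` shares no computable relationship with the one shown to `social.example`, which is precisely the linkage that centralised vendor verification makes trivial.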
Government reliance on commercial vendors
Governments worldwide increasingly rely on these commercial vendors for digital identity verification, outsourcing surveillance capabilities to private companies:
- UK age verification: The UK's age verification framework (effective 2025 under Online Safety Act) relies heavily on Yoti. [41] Pornhub, OnlyFans, and other adult platforms use Yoti for age checks, creating comprehensive records of UK citizens' adult content access.
- US government benefits: ID.me verifies identity for the IRS, Social Security Administration, and 30 state unemployment systems. [33] The company has faced criticism for facial recognition accuracy issues and opaque data practices. [42]
- Australia's Digital ID system: Australia's Digital ID Act 2024 allows private vendors to become accredited identity providers. [43] This outsources government identity verification to commercial surveillance entities.
This commercial outsourcing creates accountability gaps: vendors are not subject to Freedom of Information Act requests, parliamentary oversight, or the same privacy obligations that apply to government agencies. Users who object to commercial surveillance have no alternative path to access essential government services.
Vendor profit analysis: financial incentives for surveillance expansion
Commercial digital ID vendors have significant financial incentives to expand mandatory verification requirements and resist privacy-preserving alternatives. Revenue models depend on transaction volume and data retention:
Vendor revenue and valuations (2024):
- Yoti: £47 million revenue (2023), 87% from government contracts [65]
- Onfido: $650 million valuation at acquisition by Entrust (2024) [66]
- ID.me: $240 million revenue (2023), including $86 million IRS contract alone [67]
- Jumio: $100+ million ARR (annual recurring revenue), 2024 estimate [68]
- IDEMIA: €4.1 billion revenue (2023), digital ID division represents 18% [69]
Implication: Commercial incentives favor mandatory digital ID expansion over privacy-preserving alternatives. Yoti's business model depends on governments requiring age verification; ID.me's IRS contract requires facial recognition (despite alternatives existing). When governments propose privacy-by-design systems (on-device verification, zero-knowledge proofs), vendors lobby against them because they eliminate transaction fees and data collection opportunities.
Example: During UK Online Safety Act consultations (2023), the Age Verification Providers Association (dominated by Yoti, Onfido) lobbied against on-device age estimation, arguing it was "insufficiently reliable" despite academic studies showing 95%+ accuracy. [70] The actual concern: on-device verification generates zero revenue and creates no biometric database. Vendors profit from upload-based verification even when privacy-preserving alternatives exist.
How to delete your data from vendors (GDPR Article 17 requests)
If you've been verified by these vendors, you have the legal right to request data deletion under GDPR (UK/EU) or similar laws:
Yoti deletion request:
Email: privacy@yoti.com
Include: Your email, verification date(s), demand deletion under GDPR Article 17. If refused, escalate to ICO (UK) at ico.org.uk/make-a-complaint
Onfido deletion request:
Email: dataprotection@onfido.com
Onfido retains data for 7 years by default. Cite GDPR Article 17 and demand immediate deletion. Request proof of deletion from all backups.
Jumio deletion request:
Submit via: jumio.com/privacy-requests
US-based company; GDPR applies only if you're EU/UK resident. California residents can use CCPA rights.
AU10TIX deletion request:
Email: dpo@au10tix.com
Israeli company with unclear retention policies. Expect resistance. Document all correspondence for regulatory complaint if refused.
ID.me deletion request (US):
Account portal: id.me/settings → "Delete account"
⚠️ Deleting ID.me account may block access to IRS, SSA, and state services using ID.me. No GDPR protection for US citizens; rely on CCPA (California) or state-specific laws.
Note: Vendors often refuse deletion requests citing "fraud prevention" or "legal obligations." If refused, file complaints with your data protection authority: ICO (UK), EDPB (EU), California AG (CCPA). Include copies of your deletion request and their refusal.
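The requests above follow a common shape, so they can be drafted programmatically. The helper below assembles an Article 17 erasure request from the fields each vendor asks for; the wording is illustrative, not legal advice, and the recipient addresses are whichever contact points the vendor publishes.

```python
def article_17_request(vendor_email: str, user_email: str,
                       verification_dates: list[str]) -> str:
    """Draft a GDPR Article 17 erasure request (illustrative wording, not legal advice)."""
    return (
        f"To: {vendor_email}\n"
        f"From: {user_email}\n"
        "Subject: Erasure request under GDPR Article 17\n\n"
        f"I was verified by your service on: {', '.join(verification_dates)}.\n"
        "Under Article 17 GDPR I request erasure of all personal data you hold "
        "about me, including biometric templates and document scans, from live "
        "systems and backups. Please confirm deletion in writing within one "
        "month, as required by Article 12(3). If you rely on a fraud-prevention "
        "or legal-obligation exception, please identify the specific legal "
        "basis so I can include it in a regulatory complaint."
    )
```

Keep a copy of the generated request and any refusal; both are needed if you escalate to the ICO or another data protection authority.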
6. Architecture analysis: centralised vs distributed systems
The technical architecture of digital identity systems fundamentally determines their privacy implications. Centralised systems create single points of failure and surveillance, while distributed systems can preserve privacy through data minimization and user control.
Centralised architectures: surveillance by design
Centralised digital identity systems store comprehensive user data in single databases, enabling efficient verification but creating surveillance capabilities. India's Aadhaar system stores biometric and demographic data centrally, allowing correlation of user activities across multiple services.
Centralised systems face significant security risks, as a single breach can expose comprehensive user profiles; India's 2018 Aadhaar portal breach, which exposed 1.1 billion records, [24] illustrates the scale of potential harm. Even distributed systems carry systemic hardware risk: Estonia's 2017 cryptographic vulnerability, which required replacement of 760,000 ID cards, stemmed from a flaw in the card chips themselves rather than from centralised storage. [12]
Distributed architectures: privacy by design
Distributed systems store data across multiple databases or enable local verification without central storage. Estonia's X-Road system distributes data across different agencies, preventing comprehensive citizen profiling while enabling service delivery. [6]
Self-sovereign identity systems take distribution further by enabling users to control their identity data locally. These systems use cryptographic proofs to verify attributes without revealing underlying data or enabling cross-service correlation. [5]
Federated systems: compromise and complexity
Federated systems distribute identity verification across multiple providers while maintaining interoperability. The UK's retired GOV.UK Verify system used certified private identity providers, theoretically preventing any single entity from controlling access to government services.
However, federated systems can create new privacy risks by requiring users to share personal data with multiple commercial providers. The complexity of federated systems can also lead to poor user experience and low adoption rates. GOV.UK Verify was retired in 2023 after failing to achieve adoption targets (43% of target enrollment). [44]
7. Global privacy scorecard: rating 10 countries
This privacy scorecard rates 10 countries' digital identity systems across five dimensions: architecture (centralised vs distributed), legal protections, surveillance integration, function creep, and user control. Each dimension is scored 0-2 points, with higher scores indicating better privacy protections.
| Country | Architecture | Legal Protections | Surveillance Integration | Function Creep Resistance | User Control | Total Score |
|---|---|---|---|---|---|---|
| Estonia | 2 (distributed) | 2 (strong GDPR) | 2 (minimal) | 1 (some creep) | 2 (full transparency) | 9/10 |
| Germany | 2 (no central logs) | 2 (strong GDPR) | 2 (minimal) | 2 (voluntary) | 2 (PIN consent) | 10/10 |
| Switzerland | 2 (decentralised) | 2 (strong DPA) | 2 (minimal) | 2 (voluntary) | 2 (user control) | 10/10 |
| EU (eIDAS 2.0) | 1 (varies by state) | 2 (strong GDPR) | 1 (commercial use) | 1 (expanding) | 1 (selective disclosure) | 6/10 |
| United Kingdom | 1 (centralised One Login) | 1 (UK GDPR, weakening) | 1 (commercial vendors) | 0 (de facto mandatory) | 1 (limited transparency) | 4/10 |
| United States | 0 (fragmented commercial) | 1 (weak, state-level) | 0 (commercial surveillance) | 0 (unchecked expansion) | 0 (vendor control) | 1/10 |
| Australia | 1 (federated commercial) | 1 (Privacy Act 1988) | 1 (commercial vendors) | 0 (expanding rapidly) | 1 (limited) | 4/10 |
| India | 0 (centralised biometric DB) | 1 (weak enforcement) | 0 (comprehensive tracking) | 0 (massive expansion) | 0 (no user control) | 1/10 |
| China | 0 (centralised surveillance) | 0 (no privacy protections) | 0 (total integration) | 0 (authoritarian control) | 0 (no user rights) | 0/10 |
| Kenya | 0 (centralised) | 2 (strong court rulings) | 1 (limited integration) | 1 (court checks) | 1 (some transparency) | 5/10 |
Key insights from the scorecard
- Perfect scores (10/10): Germany and Switzerland demonstrate that comprehensive digital government services are achievable with strong privacy protections. Both systems emphasize user control, distributed architecture, and minimal surveillance integration.
- Near-perfect (9/10): Estonia's distributed X-Road architecture shows slight function creep (600 → 2,900 services) but maintains strong legal protections and user transparency. [6]
- Middling scores (4-6/10): The UK, Australia, Kenya, and EU eIDAS 2.0 show mixed privacy outcomes. Legal protections exist but are undermined by commercial vendor outsourcing and function creep.
- Failing scores (0-1/10): China, India, and the US receive the lowest scores. China for authoritarian surveillance-by-design, India for biometric database scale and weak enforcement, US for commercial surveillance fragmentation without legal protections.
- Commercial vendors as the primary threat: Countries with strong legal frameworks (UK, Australia, US) still score poorly when they outsource digital ID to commercial vendors (Yoti, Onfido, ID.me), which operate outside government oversight and accountability.
8. Surveillance integration and function creep
Digital identity systems show consistent patterns of surveillance integration and function creep, where systems designed for limited purposes expand to enable comprehensive monitoring and control. This expansion occurs regardless of initial privacy intentions, with quantifiable evidence across multiple countries.
Quantified function creep: India's Aadhaar
India's Aadhaar system provides the clearest evidence of function creep, with documented expansion from limited government services to comprehensive life tracking: [17]
- 2016: 13 authorized use cases (government welfare, subsidies, pensions)
- 2018: 220 integrations (banking, SIM cards, school admissions added)
- 2020: 580 integrations (private employers, e-commerce, hotels added)
- 2024: 1,200+ integrations (comprehensive tracking across government and private sectors)
This represents a 9,130% expansion in eight years. Services now requiring Aadhaar include: banking (400+ million accounts linked), [26] mobile phones (1+ billion SIM cards), income tax filing, property registration, marriage registration, vehicle registration, and Covid-19 vaccination records. [45]
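The headline expansion figure follows from simple percentage growth over the 2016 and 2024 use-case counts:

```python
def growth_pct(start: float, end: float) -> float:
    """Percentage growth from a starting count to an ending count."""
    return (end - start) / start * 100

# 13 authorized use cases (2016) to 1,200+ integrations (2024):
# growth_pct(13, 1200) is roughly 9,130%, matching the figure above.
```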
Quantified function creep: UK digital identity
The UK's approach demonstrates how sectoral requirements create de facto mandatory systems without explicit mandates:
- 2014: Right to Rent scheme launched (voluntary, 3 pilot areas)
- 2016: Right to Rent expanded nationwide (mandatory for all landlords)
- 2018: Right to Work digital verification launched
- 2022: GOV.UK One Login launched for government services
- 2024: 50+ government services require One Login, 8.3 million accounts created [46]
- 2025: Online Safety Act age verification creates comprehensive adult content access records via commercial vendors (Yoti) [41]
Facial recognition integration
Many digital identity systems integrate with facial recognition networks, enabling real-time identification and tracking:
- China: 700+ million surveillance cameras with facial recognition capabilities. [20] Citizens are identified in real-time when entering metro stations (Beijing Metro: 100% facial recognition coverage), airports, schools, and shopping malls. Integration with digital ID enables correlation of physical movements with online activities.
- India: National Automated Facial Recognition System (AFRS) launched 2020, with plans to integrate 1.2 million CCTV cameras across 29 states. [47] Combined with Aadhaar's biometric database (2.74 billion iris scans, facial photos for 1.37 billion people), this creates comprehensive real-time tracking capabilities.
- UK: Metropolitan Police's Live Facial Recognition (LFR) deployments (42 operations, 2023-2024) match faces against watchlists in real-time. [48] While not yet integrated with One Login, technical infrastructure exists for future integration.
Social credit and behavioral monitoring
China's Social Credit System demonstrates how digital identity can enable comprehensive behavioral monitoring and social control. The system tracks citizens' financial transactions, social media posts, legal compliance, and behavioral patterns. [9] Documented impacts include:
- Travel restrictions: 23 million high-speed rail trips denied (2018-2021), 17.5 million flight bookings blocked. [21]
- Education exclusions: Children of low-credit parents barred from private schools.
- Employment discrimination: 3.5 million people on "dishonest debtors" list face employment restrictions. [49]
- Internet throttling: Low-credit individuals experience slower internet speeds and reduced access to services.
Function creep patterns across countries
Function creep follows predictable patterns regardless of country or initial privacy intentions:
- Voluntary → De facto mandatory: Systems launch as optional, then become required through service integration (India: 13 → 1,200 use cases; UK: voluntary → employment/housing requirements).
- Limited purpose → Comprehensive tracking: Single-purpose systems expand across sectors over time; even Estonia's privacy-preserving X-Road grew from 600 to 2,900 connected services.
- Government-only → Private sector integration: Systems designed for government services expand to commercial surveillance (eIDAS 2.0 explicitly allows private sector use; India's Aadhaar now required by telecom, banks, e-commerce).
- Authentication → Behavioral monitoring: Identity verification systems evolve into behavioral tracking (China's ID integration with Social Credit System; India's authentication logs tracking 29-36 billion requests annually).
9. Encryption and end-to-end security: the eIDAS 2.0 controversy
Digital identity systems rely on encryption and public key infrastructure (PKI) to secure credentials and prevent forgery. However, government interventions in encryption standards, particularly the EU's eIDAS 2.0 Article 45, threaten to undermine the security foundations of digital identity while expanding state surveillance capabilities.
How digital ID encryption works
Digital identity credentials rely on public key cryptography: governments issue digitally-signed credentials using private keys, and verifiers check signatures using corresponding public keys. For this system to work securely, users must trust that public keys genuinely belong to legitimate issuers, not impostors or malicious actors.
Web browsers maintain "trust stores" (lists of trusted Certificate Authorities) that vouch for key ownership. When you visit a government site to download your digital ID, your browser checks that the site's certificate was issued by a trusted CA. This prevents man-in-the-middle attacks where malicious actors impersonate legitimate services.
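The trust relationship described above can be sketched in a few lines. This is an illustrative toy, not any real wallet's API: HMAC stands in for the asymmetric signatures (e.g. ECDSA) that production systems actually use, and the issuer name `gov.example` is hypothetical.

```python
import hashlib
import hmac
import json

def sign_credential(issuer_key: bytes, claims: dict) -> dict:
    """Issuer signs a canonical encoding of the claims.
    (HMAC is a symmetric stand-in for a real asymmetric signature.)"""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "issuer": claims["iss"], "sig": tag}

def verify_credential(trust_store: dict, credential: dict) -> bool:
    """Verifier accepts only credentials from issuers in its trust store."""
    key = trust_store.get(credential["issuer"])
    if key is None:
        return False  # unknown issuer: reject regardless of signature
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

# The verifier's trust store plays the role a browser's CA list plays.
trust_store = {"gov.example": b"issuer-secret"}
cred = sign_credential(b"issuer-secret", {"iss": "gov.example", "over_18": True})
assert verify_credential(trust_store, cred)

# A credential from an issuer outside the trust store is rejected,
# which is exactly the check mandatory CA trust would bypass.
rogue = sign_credential(b"rogue-secret", {"iss": "rogue.example", "over_18": True})
assert not verify_credential(trust_store, rogue)
```

The point of the sketch: whoever controls the trust store decides which issuers are believed, which is why Article 45's mandate over browser trust stores (next subsection) matters.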
eIDAS 2.0 Article 45: mandatory browser trust for government CAs
The EU's eIDAS 2.0 regulation (effective September 2023, implementation deadline September 2026) includes Article 45, which mandates that web browsers must trust government-designated Certificate Authorities from all 27 EU member states without the security audits and transparency requirements that apply to commercial CAs. [29, 30]
What Article 45 requires:
- Mandatory trust: Browsers (Chrome, Firefox, Safari, Edge) must accept certificates from government-designated "Qualified Web Authentication Certificates" (QWACs) without independent security review
- No transparency logs: Unlike commercial CAs (which must log all certificates to public Certificate Transparency logs), government QWACs can issue certificates in secret [30]
- Legal liability shield: Governments and their designated CAs are immune from legal liability for mis-issuance, even if certificates are used for surveillance or impersonation [29]
- Compliance deadline: by September 2026, browsers must implement Article 45 or face EU market access restrictions
Security and surveillance risks
Security researchers, browser vendors (Mozilla, Google, Apple), and digital rights organisations (EFF, Mozilla Foundation) have warned that Article 45 creates catastrophic surveillance and security vulnerabilities: [30, 31]
- Man-in-the-middle attacks: Government CAs can issue certificates for any domain (google.com, facebook.com, signal.org) and browsers must accept them as valid. This enables governments to impersonate websites, intercept encrypted traffic, and inject surveillance malware while browsers display "secure connection" indicators. [30]
- Cross-border surveillance: Because all browsers must trust all 27 EU member state CAs, Hungary's government could issue certificates to spy on German citizens, Poland's government could surveil French activists, etc. There is no mechanism to limit which countries' CAs can issue certificates for which domains.
- No accountability: Commercial CAs that mis-issue certificates face removal from browser trust stores (e.g., Symantec was distrusted in 2018 after governance failures). Article 45 prohibits browsers from distrusting government CAs, even after proven security failures. [29]
- Secret surveillance orders: Unlike Certificate Transparency (which publicly logs all certificates), government QWACs can be issued in secret. Intelligence agencies can compel CAs to issue fake certificates for surveillance targets without public disclosure. [31]
Mozilla and browser vendor resistance
Mozilla (Firefox), Google (Chrome), Apple (Safari), and 300+ security researchers signed an open letter opposing Article 45 in October 2023, warning that it "undermines decades of web security progress" and creates "government backdoors into encrypted communications." [30, 31]
Current status (October 2025): The eIDAS 2.0 regulation passed despite opposition. Browser vendors face a dilemma: implement Article 45 (compromising security) or refuse (losing EU market access). Mozilla has stated it will implement "under protest" while advocating for legislative changes. Google and Apple have not publicly committed to compliance. [71]
Impact on digital identity security
If browsers implement Article 45, digital identity wallets (including EU Digital Identity Wallets mandated by eIDAS 2.0) become vulnerable to state-level impersonation attacks:
- Government could issue fake certificates for digital wallet providers, intercepting credential downloads
- Man-in-the-middle attacks could modify credentials in transit (e.g., adding surveillance tags invisible to users)
- Cross-border surveillance: any EU member state could spy on citizens of other member states accessing digital ID services
The paradox: eIDAS 2.0 aims to create privacy-preserving digital identity (via selective disclosure wallets) while simultaneously mandating encryption backdoors that undermine the security foundations those wallets depend on.
What users can do
- Monitor browser compliance: Check if your browser implements Article 45. Mozilla tracks this at blog.mozilla.org/security/eidas-article-45
- Use browser extensions: Tools like "Certificate Patrol" and "Cert Spotter" alert when certificates change unexpectedly, detecting potential man-in-the-middle attacks
- VPN with DNS-over-HTTPS: VPNs with DoH prevent ISPs from redirecting traffic to government-controlled servers for surveillance injection
- Advocate for Article 45 repeal: Contact MEPs demanding Article 45 removal or amendment to require Certificate Transparency logging for government CAs
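The certificate-monitoring idea above can be sketched as a simple pinning tripwire: record the SHA-256 fingerprint of a server's certificate, then flag any connection where the presented certificate no longer matches. This is an illustrative example, not a hardened client; the host and certificate bytes are placeholders, and real pinning must also accommodate legitimate key rotation.

```python
import hashlib
import socket
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def matches_pin(der_cert: bytes, pinned: set) -> bool:
    """True only if the presented certificate matches a known-good pin."""
    return fingerprint(der_cert) in pinned

def fetch_der_cert(host: str, port: int = 443) -> bytes:
    """Fetch a server's leaf certificate in DER form (network required)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

# Offline demonstration with stand-in certificate bytes:
good = b"der-bytes-of-known-cert"
pins = {fingerprint(good)}
assert matches_pin(good, pins)                      # expected certificate
assert not matches_pin(b"der-of-mitm-cert", pins)   # substituted certificate
```

A pin mismatch does not prove an attack, but it surfaces exactly the silent certificate substitution that a coerced or rogue CA would enable.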
10. Rights impacts: work, housing, education, and financial access
Digital identity systems fundamentally impact citizens' rights to work, housing, education, and financial access. These impacts reveal how identity verification requirements can restrict access to essential services and enable discrimination.
Right to work: employment verification
Digital identity systems increasingly require employment verification, creating barriers for undocumented workers and enabling employer surveillance. The UK's Right to Work scheme requires employers to verify employees' eligibility digitally, building searchable records of employment applications.
India's Aadhaar is demanded for many jobs, while China's system enables comprehensive monitoring of workers' activities.
Right to housing: landlord verification
Digital verification requirements for housing create barriers for vulnerable populations and enable discrimination. The UK's Right to Rent scheme requires landlords to verify tenants' immigration status digitally, creating housing application databases and potential discrimination against foreign nationals.
Comparable housing verification requirements elsewhere create similar barriers for marginalized populations and expose residential patterns to surveillance.
Right to education: enrollment requirements
Digital identity systems increasingly require verification for school enrollment, creating barriers for immigrant and stateless children. In India, schools widely demand Aadhaar for admission despite a 2018 Supreme Court ruling that it cannot be made mandatory for this purpose.
These requirements can exclude children from education based on their parents' documentation status or immigration history, violating fundamental rights to education.
Financial access: banking and benefits
Digital identity systems often require verification for banking and benefits access, creating financial surveillance capabilities. India's Aadhaar is required for bank accounts and government benefits, while China's system enables comprehensive financial monitoring.
Verification failures can exclude people from financial services entirely, creating barriers to economic participation while exposing financial activity to surveillance.
11. The case for digital identity: efficiency vs surveillance
Proponents of digital identity systems argue these tools deliver measurable benefits: reduced fraud, streamlined government services, financial inclusion for the unbanked, and modernized infrastructure. These are legitimate objectives. The question is not whether digital ID can achieve these goals, but whether privacy-preserving architectures can achieve them without building surveillance infrastructure.
The efficiency and inclusion case
- Financial inclusion: The World Bank estimates 850 million people lack any official ID, excluding them from banking, employment, and government services. [2] India's Aadhaar enabled 310 million previously unbanked Indians to open bank accounts between 2014 and 2020. [72] Digital ID can reduce barriers to financial services when implemented with privacy safeguards.
- Fraud reduction: Estonia's e-ID reduced tax fraud by an estimated 25% between 2005 and 2015 through automated cross-checking of income declarations. [73] India's Aadhaar eliminated an estimated 55 million "ghost beneficiaries" (fake welfare recipients), saving ₹90,000 crore ($12 billion) annually. [74] Digital verification prevents duplicate claims and identity theft, but surveillance is not required to achieve this.
- Service delivery efficiency: Estonia delivers 99% of government services online with average transaction time of 3 minutes (vs 2-4 hours for paper-based systems). [6] Digital ID streamlines bureaucracy and reduces administrative costs without requiring centralised surveillance databases.
- Border security and immigration management: Digital ID systems enable faster, more accurate identity verification at borders, reducing passport fraud and illegal immigration. The EU's Entry/Exit System (EES, repeatedly delayed from its planned 2024 launch) uses biometric verification to track border crossings, detecting visa overstays and fraudulent documents.
- Pandemic response: During COVID-19, digital ID systems enabled rapid vaccine certificate distribution (EU Digital COVID Certificate covered 600+ million people). [75] Digital credentials allowed contact tracing while theoretically preserving privacy (though implementation often failed this standard).
Why efficiency doesn't require surveillance
The evidence from Estonia, Germany, and privacy-preserving COVID certificate systems (such as Switzerland's decentralised model) shows that efficiency, fraud reduction, and inclusion are compatible with privacy-by-design. The key distinction:
Surveillance vs Privacy-Preserving Digital ID:
- Surveillance model: Central database stores all identity data; government tracks every service access; cross-service correlation enables profiling (India Aadhaar, China, UK One Login)
- Privacy model: Distributed data exchange queries only needed attributes; users see audit logs of all access; no central profiling database (Estonia X-Road, Germany eID, Switzerland COVID cert)
Both models achieve fraud reduction and service efficiency. The difference is policy choice, not technical limitation. Centralised surveillance systems are not more efficient; they are more controllable.
The steelman: when is comprehensive logging justified?
Advocates for surveillance-capable systems argue that comprehensive audit trails are necessary for:
- National security: Tracking suspected terrorists or foreign agents via their interactions with government services, banking, and travel
- Organized crime prevention: Detecting money laundering, benefit fraud rings, and human trafficking through cross-service correlation
- Epidemic control: Contact tracing for disease outbreaks requires comprehensive location and social interaction tracking
- Child protection: Preventing minors' access to harmful content requires age verification with enforcement mechanisms
Counterargument: These scenarios require targeted surveillance with judicial oversight (warrants, proportionality tests, sunset clauses), not universal surveillance of entire populations. The current trajectory enables warrantless profiling of billions of people:
- India's 29-36 billion annual Aadhaar authentications log every ration collection, SIM card purchase, and bank transaction for 1.37 billion people without warrants. [16]
- China's Social Credit System tracks 1.4 billion citizens' travel, purchases, social media, and employment without judicial review. [9]
- UK's One Login logs 20 million users' access to tax, benefits, healthcare, and housing services without independent oversight. [5]
Estonia demonstrates that targeted fraud detection works with distributed architectures; universal logging is policy preference, not operational necessity. When Estonia's Tax Board suspects fraud, they request specific queries via X-Road (with logs visible to the citizen). [11] Surveillance is not eliminated; it is made transparent and accountable.
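The transparency mechanism described here, citizen-visible access logs, can be made tamper-evident with hash chaining, where each log entry commits to its predecessor. The following is a conceptual sketch of that idea, not X-Road's actual log format; all field names are illustrative.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log: list, record: dict) -> None:
    """Append an access record that commits to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    log.append({"prev": prev, "record": record,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited or deleted entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"prev": prev, "record": entry["record"]},
                          sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"agency": "Tax Board", "query": "income", "ts": "2025-01-01"})
append_entry(log, {"agency": "Police", "query": "address", "ts": "2025-02-02"})
assert verify_chain(log)

# Retroactively editing a past access record is detectable.
log[0]["record"]["agency"] = "Unknown"
assert not verify_chain(log)
```

The design point: an agency can still query citizen data, but it cannot quietly rewrite the record that it did so, which is what makes surveillance "transparent and accountable" rather than invisible.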
The bottom line
Digital identity systems can reduce fraud, enable financial inclusion, streamline services, and modernize government without building universal surveillance infrastructure. The global pattern of choosing centralised, surveillance-capable systems over privacy-preserving alternatives reflects political preferences for population control, not technical or cost constraints. Privacy-by-design systems exist, cost no more to build, and deliver equivalent outcomes. The question is political will.
12. Can digital IDs be privacy-preserving? Evidence from implementation
The evidence from global implementation suggests that while digital identity systems can theoretically be designed with privacy-by-design principles, practical implementation consistently favors surveillance capabilities over privacy protections.
Technical feasibility vs political reality
Privacy-enhancing technologies exist and are technically feasible, as demonstrated by Estonia's distributed data exchange and Germany's hardware-backed keys. However, political and economic incentives consistently favor centralised, surveillance-capable systems.
Governments prefer centralised systems for control and efficiency, while service providers prefer comprehensive data collection for business purposes. These incentives create pressure toward surveillance-by-design rather than privacy-by-design.
Function creep inevitability
Even well-designed privacy-preserving systems drift toward expansion: Estonia's X-Road, while privacy-preserving, has grown to encompass far more services and data flows than originally deployed.
The UK's approach demonstrates how voluntary systems can become de facto mandatory through sectoral requirements, while India's Aadhaar shows how limited-use systems can expand to comprehensive surveillance.
Cultural and legal factors
Cultural attitudes toward privacy and legal frameworks significantly impact implementation outcomes. Countries with strong privacy cultures and legal protections (like Switzerland and Germany) implement more privacy-preserving systems.
However, even in privacy-conscious countries, digital identity systems tend toward surveillance capabilities over time, suggesting that technical architecture alone cannot prevent function creep.
13. Policy frameworks and legal protections
Legal frameworks and policy choices significantly impact the privacy implications of digital identity systems. Strong data protection laws and independent oversight can mitigate surveillance risks, while weak frameworks enable comprehensive monitoring.
Data protection laws and enforcement
Countries with strong data protection laws and independent enforcement agencies implement more privacy-preserving digital identity systems. The EU's GDPR provides strong privacy protections, while countries like Kenya have strengthened data protection laws in response to digital identity concerns.
However, even strong legal frameworks cannot prevent function creep if technical architecture enables surveillance capabilities. Legal protections must be combined with privacy-by-design technical choices.
Independent oversight and transparency
Independent oversight bodies can monitor digital identity systems and prevent abuse. Estonia's Data Protection Inspectorate provides oversight of the e-ID system, while other countries have similar independent agencies.
Transparency requirements can also help prevent surveillance abuse by requiring public reporting of system usage and data access patterns.
Constitutional and human rights protections
Constitutional protections for privacy and human rights can limit digital identity surveillance capabilities. Courts in countries like Kenya and Jamaica have struck down digital identity programmes that violated privacy rights.
However, these protections are only effective if courts are willing to enforce them and if technical systems can be designed to comply with privacy requirements.
14. Implementation timeline: 2024-2025 global rollouts
Digital identity systems are expanding rapidly worldwide, with major rollouts scheduled for 2024-2026. This timeline tracks key implementation milestones and their privacy implications.
| Date | Country/Region | Milestone | Privacy Impact |
|---|---|---|---|
| Sept 2023 | European Union | eIDAS 2.0 regulation effective [3] | Mandates browser trust for government CAs (surveillance risk) |
| May 2024 | Australia | Digital ID Act 2024 passed [43] | Allows private vendors as accredited providers |
| May 2025 | United States | Real ID Act full enforcement [32] | State-level fragmentation continues (13 states with mDL) |
| July 2025 | United Kingdom | Online Safety Act age verification begins [41] | Commercial vendors (Yoti) track adult content access |
| Sept 2026 | European Union | eIDAS 2.0 browser compliance deadline [29] | Browsers must trust government CAs (Mozilla/Google objections) |
| 2026 | European Union | All 27 member states must offer EDIW [3] | Cross-border recognition enables EU-wide tracking |
| 2026 | India | Aadhaar integration with AFRS facial recognition [47] | 1.2M cameras + 1.37B biometric records = real-time tracking |
| 2026-2027 | United Kingdom | One Login expansion to local services | De facto mandatory for housing, employment, benefits |
Key trends in global rollouts
- Accelerating timelines: Digital ID systems that took 5-10 years to design (Estonia 2002, India 2009) are now being deployed in 2-3 years (Australia 2024, UK One Login 2022-2024). Faster deployment means less time for privacy impact assessments and public debate.
- Commercial vendor dominance: Governments increasingly outsource implementation to private vendors (Yoti, Onfido, ID.me, IDEMIA), creating commercial surveillance infrastructure that operates outside government oversight.
- Cross-border integration: eIDAS 2.0 mandates cross-border recognition across 27 EU countries, enabling surveillance across jurisdictions. This creates pressure for non-EU countries to adopt compatible systems for international interoperability.
- Facial recognition integration: Multiple countries (India, China, UK) are integrating digital ID with facial recognition networks, creating real-time tracking capabilities beyond traditional identity verification.
15. The path forward: recommendations for privacy-by-design
While digital identity systems present significant privacy risks, evidence from successful implementations suggests that privacy-by-design approaches can work when combined with strong legal frameworks and cultural commitment to privacy protection.
Technical recommendations
- Implement distributed data exchange rather than centralised databases
- Use zero-knowledge proofs and selective disclosure mechanisms
- Enable on-device processing and local storage where possible
- Implement strong encryption and access controls
- Provide users with transparency and control over their data
- Design systems to minimize data collection and retention
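Two of these recommendations, selective disclosure and data minimization, can be illustrated with a salted-hash scheme in the spirit of SD-JWT-style credentials: the issuer signs commitments to attributes rather than the attributes themselves, so the holder can reveal a single attribute (plus its salt) without exposing anything else. A minimal sketch, with HMAC standing in for the issuer's real asymmetric signature and all names hypothetical:

```python
import hashlib
import hmac
import json
import secrets

def commit(name: str, value, salt: bytes) -> str:
    """Salted hash commitment to a single attribute."""
    blob = json.dumps([salt.hex(), name, value]).encode()
    return hashlib.sha256(blob).hexdigest()

def issue(issuer_key: bytes, attributes: dict):
    """Issuer signs only the list of commitments, never the raw attributes."""
    salts = {k: secrets.token_bytes(16) for k in attributes}
    digests = sorted(commit(k, v, salts[k]) for k, v in attributes.items())
    sig = hmac.new(issuer_key, json.dumps(digests).encode(),
                   hashlib.sha256).hexdigest()
    return {"digests": digests, "sig": sig}, salts

def verify_disclosure(issuer_key: bytes, credential, name, value, salt) -> bool:
    """Check the issuer's signature, then the single disclosed attribute."""
    expected = hmac.new(issuer_key, json.dumps(credential["digests"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["sig"]):
        return False
    return commit(name, value, salt) in credential["digests"]

key = b"issuer-secret"
cred, salts = issue(key, {"over_18": True, "birthdate": "1990-05-01"})

# Holder proves "over 18" without revealing the birthdate at all;
# the verifier never sees the undisclosed commitment's preimage.
assert verify_disclosure(key, cred, "over_18", True, salts["over_18"])
assert not verify_disclosure(key, cred, "over_18", False, salts["over_18"])
```

Production schemes add holder binding, revocation, and true zero-knowledge predicates, but the core minimization property (reveal one attribute, hide the rest) is already visible here.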
Legal and policy recommendations
- Establish strong data protection laws with independent enforcement
- Require privacy impact assessments for all digital identity systems
- Implement independent oversight and transparency requirements
- Provide opt-out mechanisms for non-essential services
- Establish clear limits on data sharing and surveillance use
- Create accountability mechanisms for system abuse
Cultural and social recommendations
- Foster public awareness of digital identity privacy risks
- Support civil society organisations monitoring digital identity systems
- Encourage academic research on privacy-preserving identity systems
- Promote international cooperation on privacy standards
- Develop educational programmes on digital rights and privacy
Conclusion
Digital identity systems can be designed with privacy-by-design principles, as demonstrated by successful implementations in Estonia and Germany. However, achieving privacy-preserving digital identity requires technical architecture choices, legal frameworks, and cultural commitment to privacy protection. Without these elements, digital identity systems inevitably enable surveillance and function creep, regardless of initial intentions.
The global evidence suggests that privacy-preserving digital identity is possible but requires proactive design choices and ongoing vigilance to prevent surveillance expansion. Countries implementing digital identity systems face a genuine choice between privacy-preserving and surveillance architectures; convenience does not require surveillance, but the default trajectory favours it.
16. Citizen action toolkit: defending privacy rights globally
Digital ID surveillance is not inevitable. Citizens worldwide can push back through data access requests, technical countermeasures, and political pressure. Here's country-specific guidance for protecting your rights.
For Aadhaar users (India)
1. Request your Aadhaar authentication history
UIDAI provides a portal to view every authentication request made against your Aadhaar number:
- Visit: resident.uidai.gov.in/aadhaar-auth-history
- Login with Aadhaar + OTP
- Download last 6 months of authentication logs (who verified, when, what service)
- Report unauthorized authentications via UIDAI complaint portal
2. Use VPN to prevent location tracking
Many Aadhaar-linked services (banking apps, telecom, e-commerce) log your IP address alongside authentication. A VPN masks your IP address, limiting location-based profiling. Recommended: NordVPN, ProtonVPN, Surfshark (all have India servers).
3. Request biometric lock (if fingerprints worn/damaged)
If you've experienced authentication failures due to worn fingerprints: visit your nearest Aadhaar Enrollment Centre to update biometrics or request a biometric lock (which disables fingerprint/iris authentication in favour of OTP-only authentication). A biometric lock prevents repeated fingerprint/iris failures while preserving access.
4. File RTI requests for AFRS integration
India's Automated Facial Recognition System (AFRS) is integrating with Aadhaar biometrics. File Right to Information Act requests asking:
- Which states have integrated AFRS with UIDAI database?
- How many facial recognition matches have been made against Aadhaar photos?
- What is the retention period for AFRS-Aadhaar query logs?
- Submit via: rtionline.gov.in
For eIDAS wallet users (EU)
1. Monitor Article 45 browser implementation
Track whether your browser implements government CA trust mandate: Check Mozilla's transparency tracker at blog.mozilla.org/security/eidas-article-45
2. Demand selective disclosure enforcement
eIDAS 2.0 mandates selective disclosure (prove "over 18" without revealing exact birthdate). If services request full data: File GDPR complaint citing Article 5(1)(c) data minimization. Template:
GDPR Article 5 Complaint:
"[Service name] requested my full birthdate when eIDAS 2.0 Article 4(2) requires selective disclosure. I demand they implement age range verification ('over 18') instead of collecting unnecessary personal data. This violates GDPR Article 5(1)(c) data minimization."
Submit to your national Data Protection Authority (the EDPB website maintains a list of every national DPA).
3. Request EDIW audit logs
eIDAS 2.0 requires wallet providers to log all credential sharing. Exercise GDPR Article 15 rights: Request full audit log showing which services accessed your credentials, when, and what data was shared. If provider refuses, escalate to DPA.
For US mDL users (mobile driver's licences)
1. Request data deletion from IDEMIA/Thales
Many US states outsource mDL systems to IDEMIA or Thales. Request deletion of biometric data:
- IDEMIA: privacy@idemia.com (cite CCPA if California resident, state-specific privacy laws otherwise)
- Thales: dataprotection@thalesgroup.com
- Demand: Deletion of facial biometrics, document scans, and verification logs
2. Avoid ID.me for government services
If IRS, SSA, or state unemployment offers multiple verification options: Choose alternatives to ID.me (in-person verification, traditional mail). ID.me facial recognition has 1-in-12,500 false match rate for African Americans vs 1-in-48,000 for white users. [76]
3. Support state privacy legislation
Contact state legislators supporting digital ID privacy bills: Demand mDL systems include user-facing audit logs (like Estonia), prohibit cross-state data sharing without warrants, and mandate on-device biometric matching (no server upload).
For all digital ID users: universal privacy strategies
1. Use privacy-focused browsers and extensions
- Browser: Firefox with `privacy.resistFingerprinting` enabled, or Brave
- Extensions: uBlock Origin, Privacy Badger, Canvas Blocker, Certificate Patrol (alerts on certificate changes)
- DNS: Use encrypted DNS (Cloudflare 1.1.1.1, Quad9 9.9.9.9) to prevent ISP tracking
2. Join digital rights organisations
- India: Internet Freedom Foundation (internetfreedom.in) – Aadhaar litigation and AFRS opposition
- EU: EDRi (European Digital Rights) (edri.org) – eIDAS Article 45 resistance
- US: Electronic Frontier Foundation (eff.org) – ID.me and mDL privacy advocacy
- Global: Privacy International (privacyinternational.org) – cross-border digital ID research
3. Adopt privacy-preserving authentication alternatives
Where possible, use decentralised identity wallets instead of centralised systems: SpruceID (spruceid.com), Microsoft Entra Verified ID, or W3C Verifiable Credentials–compliant wallets. These enable selective disclosure without vendor tracking.
17. Global digital ID watchlist (2025-2026)
Key digital ID rollouts, legal challenges, and surveillance integration milestones to track over the next 18 months:
Q4 2025: eIDAS 2.0 wallet pilot launches
- Countries piloting: Germany, France, Spain, Italy (confirmed). Watch for adoption rates, selective disclosure implementation, and Article 45 browser compliance.
- What to track: Do wallets enforce data minimization? Can users see audit logs? Are government CAs trusted without transparency?
- Metric: Pilot enrollment rate. If <20% uptake, expect pressure for mandatory requirements.
Q1 2026: India AFRS + Aadhaar full integration
- What's happening: National Automated Facial Recognition System integrates 1.2 million CCTV cameras with Aadhaar's 1.37 billion facial photos. Real-time facial recognition becomes nationwide. [47]
- What to watch:
- How many states deploy AFRS-Aadhaar integration?
- Are facial recognition matches logged and accessible via RTI?
- Does Supreme Court intervention occur (privacy activists have filed petitions)?
- Action: File RTI requests for AFRS deployment statistics; support Internet Freedom Foundation legal challenges.
Sept 2026: eIDAS 2.0 Article 45 compliance deadline
- Browsers must decide: Implement government CA trust (compromising security) or refuse (losing EU market access). [29, 30]
- What to watch:
- Does Mozilla implement "under protest"?
- Do Google/Apple comply or challenge in EU courts?
- Are government CAs used for surveillance (track certificate issuance via Certificate Transparency logs)?
- Action: Sign EFF/Mozilla petitions demanding Article 45 amendment; monitor browser compliance via Mozilla's tracker.
2026: Australia Digital ID accreditation framework launch
- What's happening: Private vendors (Yoti, Onfido, local providers) apply for government accreditation to become "trusted" identity providers. [43]
- What to watch: Which vendors win accreditation? What are data retention requirements? Do accredited providers get immunity from privacy complaints?
- Action: File FOI requests for accreditation criteria; demand vendor privacy assessments be published.
Ongoing: Litigation and judicial review
- India Aadhaar challenges: Multiple pending Supreme Court cases on AFRS integration, private sector mandate enforcement, and biometric data breaches. Track via Internet Freedom Foundation.
- Kenya digital ID appeals: Following 2021 High Court ruling against mandatory enrollment, government appealing. Precedent for African digital ID rollouts. [13]
- Jamaica Constitutional Court: A 2023 ruling struck down mandatory biometric enrollment. The government may attempt revised legislation. [14]
- eIDAS Article 45 legal challenges: Mozilla, EFF, and 300+ security researchers may file EU court challenges arguing that Article 45 violates the Charter of Fundamental Rights (Article 8, protection of personal data). [30, 31]
Transparency metrics to demand via FOI/RTI requests
- Adoption rates: How many citizens have enrolled in digital ID systems? Break enrollment down by voluntary versus coerced (required to access services).
- Authentication volumes: How many verifications per month? Which services generate most requests?
- Vendor contracts: Total value paid to Yoti, Onfido, IDEMIA, etc. Data retention terms. Breach notification clauses.
- Surveillance integration: Is facial recognition database linked to digital ID? How many matches per month?
- Exclusion statistics: How many authentication failures occur? How many people are denied services due to biometric or document issues?
Pro tip: Coordinate FOI/RTI requests with digital rights organisations to aggregate data across jurisdictions. Mass-filed requests create media attention and political pressure for transparency.
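Once coordinated FOI/RTI responses come back, they can be turned into comparable exclusion metrics with very little code. A minimal Python sketch; the jurisdiction names and figures are illustrative placeholders, not real disclosure data.

```python
# Aggregate FOI/RTI responses on authentication failures across
# jurisdictions and rank them by exclusion (failure) rate.
# All figures are illustrative placeholders.

def exclusion_rates(responses):
    """Map jurisdiction -> authentication failure rate (failures / attempts)."""
    rates = {}
    for r in responses:
        attempts = r["auth_attempts"]
        rates[r["jurisdiction"]] = (
            r["auth_failures"] / attempts if attempts else 0.0
        )
    return rates

foi_responses = [
    {"jurisdiction": "State A", "auth_attempts": 1_200_000, "auth_failures": 54_000},
    {"jurisdiction": "State B", "auth_attempts": 800_000, "auth_failures": 96_000},
]

# Highest exclusion rate first: these are the systems to scrutinise.
for state, rate in sorted(exclusion_rates(foi_responses).items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{state}: {rate:.1%} authentication failure rate")
```

Ranking jurisdictions this way makes the cross-border comparison concrete: a system failing 12% of authentications is denying services at more than double the rate of one failing 4.5%, and that gap is what generates media attention.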
18. References
- [1] 404 Media (2024) 'AU10TIX Data Breach Investigation', 404 Media.
- [2] AAMVA (2024) 'Mobile Driver's Licence Implementation Status', AAMVA. Available at: https://www.aamva.org/mdl (Accessed: 21 January 2026).
- [3] Access Now (2024) 'Digital ID Systems Must Centre Human Rights', Access Now. Available at: https://www.accessnow.org/digital-id-human-rights (Accessed: 21 January 2026).
- [4] AU10TIX (2024) 'Privacy Policy', AU10TIX. Available at: https://www.au10tix.com/privacy-policy (Accessed: 21 January 2026).
- [5] Australian Government (2024) 'Digital ID Act 2024', Australian Government. Available at: https://www.digitalid.gov.au (Accessed: 21 January 2026).
- [6] AVPA (2023) 'On-Device Age Estimation: Technical Feasibility Study', Age Verification Providers Association.
- [7] Big Brother Watch (2024) 'Metropolitan Police Live Facial Recognition Report', Big Brother Watch. Available at: https://bigbrotherwatch.org.uk/campaigns/stop-facial-recognition (Accessed: 21 January 2026).
- [8] BSI Germany (2023) 'Technical Guideline TR-03127 eID Architecture', Federal Office for Information Security. Available at: https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/Technische-Richtlinien/TR-nach-Thema-sortiert/tr03127/TR-03127_node.html (Accessed: 21 January 2026).
- [9] BSI Germany (2024) 'Smart eID Implementation Guide', Federal Office for Information Security.
- [10] CAG India (2019) 'Performance Audit of Aadhaar', Comptroller and Auditor General of India.
- [11] China State Council (2022) 'Social Credit System Annual Report', China State Council.
- [12] Crunchbase (2024) 'Jumio Valuation and Revenue Estimates', Crunchbase.
- [13] DHS (2024) 'Real ID Act Enforcement Timeline', Department of Homeland Security. Available at: https://www.dhs.gov/real-id (Accessed: 21 January 2026).
- [14] e-Estonia (2018) 'ID-Card Vulnerability: Lessons Learned', e-Estonia. Available at: https://e-estonia.com/id-card-vulnerability (Accessed: 21 January 2026).
- [15] e-Estonia (2016) 'Tax Fraud Reduction via e-ID', e-Estonia Case Study.
- [16] EDPB (2024) 'eIDAS Implementation Analysis', European Data Protection Board.
- [17] EFF (2023) 'Coalition Letter Opposing eIDAS Browser Mandate', Electronic Frontier Foundation. Available at: https://www.eff.org/eidas-browser-mandate (Accessed: 21 January 2026).
- [18] Estonian DPI (2024) 'Annual Report 2023', Estonian Data Protection Inspectorate.
- [19] Estonian ISA (2024) 'X-Road Factsheet', Estonian Information System Authority. Available at: https://www.ria.ee/en/state-information-system/x-road (Accessed: 21 January 2026).
- [20] European Commission (2024) 'eIDAS 2.0 Regulation (EU) 2024/1183', EUR-Lex. Available at: https://eur-lex.europa.eu/eli/reg/2024/1183 (Accessed: 21 January 2026).
- [21] European Commission (2022) 'EU Digital COVID Certificate: One Year On', European Commission.
- [22] Federal Government of Germany (2024) 'AusweisApp2 GitHub Repository', GitHub. Available at: https://github.com/Governikus/AusweisApp2 (Accessed: 21 January 2026).
- [23] Human Rights Watch (2023) 'China's Algorithms of Repression', Human Rights Watch. Available at: https://www.hrw.org/report/2023/04/12/chinas-algorithms-repression (Accessed: 21 January 2026).
- [24] ID.me (2024) 'Company Statistics', ID.me. Available at: https://www.id.me/about (Accessed: 21 January 2026).
- [25] ID.me (2024) 'FY2023 Revenue Disclosure', ID.me Investor Presentation.
- [26] IDEMIA (2023) 'Annual Financial Report 2023', IDEMIA.
- [27] Internet Freedom Foundation (2019) 'Biometric Exclusion Report', IFF India. Available at: https://internetfreedom.in/aadhaar-exclusion-report (Accessed: 21 January 2026).
- [28] Internet Freedom Foundation (2021) 'National Automated Facial Recognition System Analysis', IFF India. Available at: https://internetfreedom.in/facial-recognition-system (Accessed: 21 January 2026).
- [29] IRS (2022) 'ID.me Facial Recognition Controversy', Congressional Testimony.
- [30] Jumio (2023) 'Billion Verifications Milestone', Jumio. Available at: https://www.jumio.com/news (Accessed: 21 January 2026).
- [31] Jumio (2024) 'Privacy Notice', Jumio. Available at: https://www.jumio.com/legal-information/privacy-notice (Accessed: 21 January 2026).
- [32] Medianama (2024) 'Aadhaar Integration Timeline', Medianama. Available at: https://www.medianama.com/aadhaar-timeline (Accessed: 21 January 2026).
- [33] Ministry of Health and Family Welfare (India) (2021) 'Aadhaar Integration for Covid Vaccination', Government of India.
- [34] Mozilla Corporation (2025) 'eIDAS Article 45 Implementation Status', Mozilla Security Blog. Available at: https://blog.mozilla.org/security/eidas-implementation-2025 (Accessed: 21 January 2026).
- [35] Mozilla Security Blog (2023) 'Concerns with eIDAS Article 45', Mozilla. Available at: https://blog.mozilla.org/security/eidas-article-45 (Accessed: 21 January 2026).
- [36] NIST (2019) 'Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects', National Institute of Standards and Technology.
- [37] Onfido (2024) 'Global Identity Verification Report', Onfido. Available at: https://onfido.com/resources (Accessed: 21 January 2026).
- [38] Onfido (2024) 'Data Retention Policy', Onfido. Available at: https://onfido.com/privacy (Accessed: 21 January 2026).
- [39] Open Society Justice Initiative (2021) 'Kenya High Court Digital ID Judgment', Open Society Justice Initiative. Available at: https://www.justiceinitiative.org/newsroom/kenya-digital-id-court-ruling (Accessed: 21 January 2026).
- [40] Privacy International (2024) 'Digital Identity and Exclusion', Privacy International. Available at: https://privacyinternational.org/explainer/digital-identity (Accessed: 21 January 2026).
- [41] Privacy International (2024) 'Commercial Digital Identity Verification', Privacy International.
- [42] Reliance Jio (2024) 'Aadhaar-Based eKYC Statistics', Reliance Jio.
- [43] Reserve Bank of India (2020) 'Financial Inclusion Report 2020', RBI.
- [44] Reuters (2022) 'China Social Credit System Blacklist Statistics', Reuters.
- [45] Statista (2024) 'eID Activation Rates in Germany', Statista.
- [46] Supreme Court of India (2018) 'Justice K.S. Puttaswamy v. Union of India (Aadhaar Judgment)', Supreme Court of India. Available at: https://www.sci.gov.in/supremecourt/2012/35071/35071_2012_Judgement_26-Sep-2018.pdf (Accessed: 21 January 2026).
- [47] TechCrunch (2024) 'Entrust Acquires Onfido for $650 Million', TechCrunch. Available at: https://techcrunch.com/entrust-onfido-acquisition (Accessed: 21 January 2026).
- [48] The Tribune (India) (2018) 'Aadhaar Database Breach: 1.1 Billion Records Exposed', The Tribune.
- [49] UIDAI (2025) 'Aadhaar Dashboard', Unique Identification Authority of India. Available at: https://uidai.gov.in/aadhaar_dashboard (Accessed: 21 January 2026).
- [50] UIDAI (2024) 'Authentication Transaction Dashboard', UIDAI. Available at: https://uidai.gov.in/ecosystem/authentication-devices-documents/auth-transaction-data.html (Accessed: 21 January 2026).
- [51] UK Cabinet Office (2024) 'GOV.UK One Login Statistics', GOV.UK. Available at: https://www.gov.uk/government/publications/govuk-one-login-statistics (Accessed: 21 January 2026).
- [52] UK Government (2023) 'GOV.UK Verify Closure Announcement', GOV.UK. Available at: https://www.gov.uk/government/publications/govuk-verify-closure (Accessed: 21 January 2026).
- [53] UK Ofcom (2024) 'Age Verification Guidance for Online Safety Act', Ofcom. Available at: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/age-verification (Accessed: 21 January 2026).
- [54] W3C (2024) 'Verifiable Credentials Data Model 2.0', W3C. Available at: https://www.w3.org/TR/vc-data-model-2.0 (Accessed: 21 January 2026).
- [55] Wall Street Journal (2024) 'China's Vast Surveillance Network', Wall Street Journal.
- [56] World Bank (2024) 'ID4D Global Dataset', World Bank ID4D. Available at: https://id4d.worldbank.org (Accessed: 21 January 2026).
- [57] World Bank (2023) 'Identification for Development Annual Report', World Bank.
- [58] Xinhua News (2017) 'Real-Name Internet Registration Requirements', Xinhua News.
- [59] Yoti (2024) 'Annual Verification Statistics', Yoti. Available at: https://www.yoti.com/business (Accessed: 21 January 2026).
- [60] Yoti (2025) 'Privacy Policy', Yoti. Available at: https://www.yoti.com/privacy-policy (Accessed: 21 January 2026).
- [61] Yoti Ltd (2023) 'Annual Report and Accounts 2023', Companies House.
