Research Methodology

    Evidence Policy

    This page summarises how we gather and vet information before it shows up in a review, comparison, or score. It is intentionally high-level: the aim is to explain our guardrails without publishing a playbook for gaming the system.

    Core Principles

    The rules we apply to every piece of information before it influences a recommendation.

    Traceable sources only. Marketing copy, anonymous tips, or unverifiable screenshots never move scores on their own.

    Context over headlines. Legal documents, audit reports, and transparency updates are reviewed in full to avoid cherry-picked conclusions.

    Independent replication. Wherever possible we look for a second, unrelated confirmation before labelling something as resolved or verified.

    Evidence Buckets We Track

    High-level categories only — the detailed scoring rubric stays internal.

    Primary Disclosures

    Provider-owned documents such as privacy policies, security whitepapers, transparency reports, product changelogs, or configuration references. These tell us what a company says it does.

    Independent Checks

    Audits, court filings, regulatory actions, trusted investigative journalism, or academic work that can corroborate or contradict provider claims.

    Operational Signals

    Public bug reports, infrastructure incidents, third-party monitoring, or community-led testing that surface the day-to-day reality of performance and security.

    Provider Dialogue

    When a vendor supplies clarifications or private evidence, we log it. Items influence scores only after we can reference an externally verifiable record.
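
    For illustration, the four buckets could be modelled as a small taxonomy. The identifiers below are ours, not internal labels.

```python
from enum import Enum

class EvidenceBucket(Enum):
    """Hypothetical sketch of the four buckets described above."""
    PRIMARY_DISCLOSURE = "primary_disclosure"  # what a company says it does
    INDEPENDENT_CHECK = "independent_check"    # audits, filings, journalism
    OPERATIONAL_SIGNAL = "operational_signal"  # incidents, monitoring, testing
    PROVIDER_DIALOGUE = "provider_dialogue"    # vendor clarifications we log
```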

    How We Evaluate Evidence

    Evidence passes through the scoring bands below before it can affect rankings; a claim's final tally is capped at 25 points. A short illustrative sketch of the arithmetic follows each band.

    1. Base Points by Source Type

    Primary disclosure • +4 pts

    Provider-issued policies, architecture briefs, transparency or investor filings with signed authorship.

    Regulatory / legal • +3.5 pts

    Court rulings, regulator notices, audit attestations, sanctions databases.

    Independent analysis • +3 pts

    Established security researchers, long-form journalism, academic publications.

    Community signal • +1 pt (cap)

    User reports, forum posts, social threads. Logged for follow-up; cannot uplift a score alone.
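
    As a rough sketch (here and in the sections that follow, the Python is illustrative only; the identifiers are ours, not the internal rubric), the base bands translate into a simple lookup:

```python
# Illustrative only: hypothetical names, not our internal taxonomy.
BASE_POINTS = {
    "primary_disclosure": 4.0,    # provider-issued, signed authorship
    "regulatory_legal": 3.5,      # rulings, regulator notices, attestations
    "independent_analysis": 3.0,  # researchers, journalism, academia
    "community_signal": 1.0,      # hard cap; never uplifts a score alone
}
```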

    2. Quality Grade (adds 1–5 pts)

    • A (5 pts). Current (≤6 months), primary-sourced or corroborated with artefacts.
    • B (4 pts). Recent, well-documented, clear authorship.
    • C (3 pts). Adequate detail but older or partially scoped.
    • D (2 pts). Minimal detail or dated; used for narrative context only.
    • E (1 pt). Ambiguous, unverifiable; logged for watchlist purposes.
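
    Continuing the sketch, grades add directly onto the base points (reusing BASE_POINTS from above):

```python
# Grade-to-points mapping as described above (A=5 ... E=1).
QUALITY_POINTS = {"A": 5.0, "B": 4.0, "C": 3.0, "D": 2.0, "E": 1.0}

def citation_base(source_type: str, grade: str) -> float:
    """Base points plus quality grade, per sections 1 and 2."""
    return BASE_POINTS[source_type] + QUALITY_POINTS[grade]
```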

    3. Confidence Modifiers

    Bonuses (each +0.5 to +1)

    • Independence: author has no commercial link to the provider (+1).
    • Scope: covers multiple technical or legal datapoints (+0.5).
    • Replication: two or more unrelated confirmations (+1).

    Penalties

    • Age: −0.5 per six months past the first year (floor −3).
    • Conflicts: −2 if the piece is promotional content or a paid placement masquerading as analysis.
    • Contradictions: −1 when evidence conflicts with a higher-grade source.
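
    Continuing the sketch, the modifiers could combine as below. The policy text does not say whether a partial six-month period already triggers the age penalty, so the code counts only completed periods; that is our assumption.

```python
def confidence_modifier(independent: bool, multi_scope: bool, replicated: bool,
                        age_months: int, promotional: bool,
                        contradicted: bool) -> float:
    """Net adjustment from section 3's bonuses and penalties."""
    bonus = (1.0 * independent) + (0.5 * multi_scope) + (1.0 * replicated)
    # Age: -0.5 per completed six months past the first year, floored at -3.
    # Counting only completed periods is an assumption on our part.
    periods = max(0, age_months - 12) // 6
    age_penalty = -min(3.0, 0.5 * periods)
    return bonus + age_penalty - (2.0 * promotional) - (1.0 * contradicted)
```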

    4. Outcome Labels & Score Impact

    The final tally (0–25) determines how the claim influences provider trust metrics:

    • Verified (≥15 pts, replication required): Eligible to raise trust and feature scores.
    • Supported (10–14 pts): Appears in copy and dashboards; does not increase scores until replicated.
    • Watchlist (<10 pts or unresolved conflicts): Dampens confidence until higher-grade evidence is secured.
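
    Under the additive reading above, a single citation tops out near 11.5 points (4 base + 5 quality + 2.5 in bonuses), so the 15-point Verified bar implies that the 0–25 tally sums corroborating citations. That aggregation, and everything in the sketch below, is our assumption rather than stated policy.

```python
def outcome_label(citation_scores: list[float], replicated: bool) -> str:
    """Map a claim's tally to Verified / Supported / Watchlist.

    Assumes per-citation scores are summed and capped at 25; the policy
    text does not spell out the aggregation.
    """
    total = min(25.0, max(0.0, sum(citation_scores)))
    if total >= 15.0 and replicated:
        return "Verified"
    if total >= 10.0:
        return "Supported"
    return "Watchlist"
```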

    Claim Review Flow

    Every new claim passes through the same lightweight checklist.

    1. Collect. Capture the source, date, and exact statement being made.

    2. Classify. Map the claim to our criteria taxonomy (logging, infrastructure, ownership, legal, etc.).

    3. Corroborate. Search for independent confirmation or contradiction. Uncorroborated items are tagged for monitoring.

    4. Record the outcome. Assign Verified / Supported / Watchlist labels, note any score impact, and log follow-up owners.
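
    As a minimal sketch of the checklist, a claim could be tracked as a single record whose fields fill in step by step. The field names are illustrative, not an internal schema.

```python
from dataclasses import dataclass, field

@dataclass
class ClaimRecord:
    # 1. Collect
    source: str                 # where the statement appears
    captured: str               # date the source was captured
    statement: str              # the exact claim being made
    # 2. Classify
    category: str = "unmapped"  # logging, infrastructure, ownership, legal, ...
    # 3. Corroborate
    corroborations: list[str] = field(default_factory=list)
    # 4. Record the outcome
    label: str = "Watchlist"    # Verified / Supported / Watchlist
    follow_up_owner: str = ""
```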

    Maintenance & Review Cadence

    Rolling updates. High-sensitivity criteria (breaches, jurisdiction changes, audits) are monitored continually and updated as soon as new evidence lands.

    Full refresh cycles. Every provider receives a structured evidence review at least twice a year. Providers with active watchlist items are reviewed more frequently.

    Retirement of stale data. Evidence older than 18 months is archived unless it still reflects an active contractual, legal, or technical state.
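
    The retirement rule reduces to a simple predicate, sketched below with a 30-day-month approximation of the 18-month cutoff.

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=18 * 30)  # ~18 months, using 30-day months

def should_archive(evidence_date: date, reflects_active_state: bool) -> bool:
    """Archive evidence older than 18 months unless it still reflects an
    active contractual, legal, or technical state."""
    return (date.today() - evidence_date) > STALE_AFTER and not reflects_active_state
```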

    Requesting Clarifications

    Providers, researchers, or readers can flag missing or outdated evidence by contacting us at @thevpnmatrix.

    Include the claim in question, a public source (if available), and any additional context. We respond with our findings once the evidence review is complete.

    We do not publish proprietary scoring spreadsheets or internal weighting logic. The qualitative guidance above is the public-facing summary.
