Editorial Methodology
How Stories Are Selected
Transparency is not optional; it is the product. This page explains every step of how stories reach The Verity Ledger, from source selection to the scoring rubric that determines what you see.
- Tracked sources: 22
- Average reliability score: 91/100
- Update frequency: every 15 minutes
Source Selection
Which outlets we monitor and why
We maintain a curated list of news sources that meet strict editorial criteria. Each source is evaluated on a 100-point reliability scale before being added to our monitoring system. Sources must demonstrate a consistent track record of accuracy, maintain a published corrections policy, and employ identifiable journalists with verifiable credentials.
Our source list spans the full spectrum of news coverage: major wire services (AP, Reuters) provide baseline factual reporting; investigative outlets (ProPublica, ICIJ, Bellingcat, Center for Public Integrity) surface stories that require deep research; and mainstream outlets (BBC, NPR, PBS, The Guardian, Time, Axios) provide the comparative baseline we use to identify coverage gaps.
Source Reliability Rubric (0–100)
Primary Evidence (+40)
Does the outlet regularly link to original court documents, leaked memos, raw data, or primary sources? Outlets that cite primary evidence score higher.
Accountability (+30)
Does the outlet have a published corrections policy? Is there a masthead of real journalists with verifiable identities and professional histories?
Independence (+20)
Is the reporting free from sponsored content, corporate ownership conflicts, or clear partisan lobbying ties? We assess financial independence and editorial separation.
Track Record (+10)
How has the outlet performed historically? We review retraction rates, fact-check ratings from independent organizations, and industry recognition.
Sources scoring below 75 are not included. Scores are reviewed quarterly. View all sources and their scores.
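The rubric above is additive: four components with fixed caps sum to a 0–100 score, and a source must reach 75 to be tracked. A minimal sketch of that arithmetic (the component caps and the 75-point threshold come from this page; the function names and the idea of clamping each input to its cap are illustrative assumptions):

```python
# Illustrative sketch of the 0-100 source reliability rubric.
# Caps and the inclusion threshold mirror the rubric above;
# how raw inputs are produced is an assumption.

INCLUSION_THRESHOLD = 75  # sources scoring below this are not included

def reliability_score(primary_evidence: int, accountability: int,
                      independence: int, track_record: int) -> int:
    """Sum the four rubric components, each clamped to its published cap."""
    components = [
        (primary_evidence, 40),  # Primary Evidence (+40)
        (accountability, 30),    # Accountability (+30)
        (independence, 20),      # Independence (+20)
        (track_record, 10),      # Track Record (+10)
    ]
    return sum(min(max(value, 0), cap) for value, cap in components)

def is_tracked(score: int) -> bool:
    """A source is monitored only if it meets the threshold."""
    return score >= INCLUSION_THRESHOLD
```

A source with full marks in every component scores 100; one weak component (say, no published corrections policy) can by itself drop a source below the 75-point bar.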
Ingestion and Filtering
How articles enter the pipeline
Every 15 minutes, our system scans RSS feeds from all tracked sources. New articles are ingested and immediately pass through a multi-stage filtering pipeline designed to remove noise and ensure only substantive reporting reaches the site.
Duplicate Detection
Articles are checked against existing content using both URL matching and title similarity analysis (Levenshtein distance). Near-duplicates are automatically grouped rather than shown separately.
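The two checks described above can be sketched as follows: exact URL matching first, then a Levenshtein edit-distance comparison on titles, normalized by title length. The distance algorithm is the standard one; the 20% similarity threshold and the helper names are assumptions for illustration:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def is_near_duplicate(url_a: str, url_b: str,
                      title_a: str, title_b: str,
                      threshold: float = 0.2) -> bool:
    """URL match, or titles within `threshold` normalized edit distance.

    The 0.2 threshold is a hypothetical value, not the production setting.
    """
    if url_a == url_b:
        return True
    dist = levenshtein(title_a.lower(), title_b.lower())
    return dist / max(len(title_a), len(title_b), 1) <= threshold
```

Pairs that pass this check would be grouped together rather than shown as separate feed entries.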
Content Quality Filter
Articles are assessed for substantive reporting value. Listicles, opinion columns without factual basis, sponsored content, and press releases are filtered out. Only original reporting and analysis passes through.
Story Grouping
Articles covering the same event from different outlets are automatically grouped together. Only the article from the most reliable source appears in the main feed; others are accessible via 'More About This Topic.'
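A sketch of that selection step, assuming each ingested article already carries an event identifier and its source's reliability score (the field names and dict shape here are hypothetical):

```python
from collections import defaultdict

def build_feed(articles: list[dict]) -> list[dict]:
    """Group articles by event; surface only the most reliable source per group.

    Assumes each article dict has an "event_id" and a "source_score" field.
    """
    groups = defaultdict(list)
    for article in articles:
        groups[article["event_id"]].append(article)

    feed = []
    for event_id, group in groups.items():
        # Highest-reliability source leads; the rest go behind the
        # 'More About This Topic' link.
        group.sort(key=lambda a: a["source_score"], reverse=True)
        lead, *others = group
        feed.append({"lead": lead, "more_about_this_topic": others})
    return feed
```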
Neutrality Check
All AI-generated summaries are written in a clinical, wire-service tone. Loaded adjectives, editorial framing, and opinion language are systematically removed. We present facts, not interpretations.
Coverage Classification
How we determine what's "overlooked"
This is the core of what makes The Verity Ledger different. Every article is classified into one of two categories: Mainstream (widely reported across major outlets) or Overlooked (significant but underreported). The classification is based on multiple signals:
| Signal | What It Measures | Weight |
|---|---|---|
| Source Type | Is this from an investigative outlet or a mainstream wire service? | High |
| Cross-Source Coverage | How many of our tracked sources are covering this same story? | High |
| Topic Category | Is this a topic that typically receives less mainstream attention (e.g., environmental policy, corporate accountability)? | Medium |
| Civic Impact | Does this story directly affect public health, finances, rights, or safety? | Medium |
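The table above names the signals and their relative weights but not the exact arithmetic. One plausible reading is a weighted sum over normalized signals, where each signal is 0–1 and higher values point toward "Overlooked" (e.g., an investigative-outlet source, or few tracked sources covering the story). The numeric weights, the threshold, and the signal encoding below are all assumptions, not the production values:

```python
# Hypothetical weights: the page labels signals only High/Medium.
WEIGHTS = {
    "source_type": 2.0,            # High
    "cross_source_coverage": 2.0,  # High
    "topic_category": 1.0,         # Medium
    "civic_impact": 1.0,           # Medium
}

def classify(signals: dict[str, float], threshold: float = 3.0) -> str:
    """Return "Overlooked" or "Mainstream" from normalized 0-1 signals.

    Each signal value means: 1.0 strongly suggests the story is overlooked
    (e.g., cross_source_coverage = 1.0 means few tracked sources cover it).
    """
    score = sum(WEIGHTS[name] * value for name, value in signals.items())
    return "Overlooked" if score >= threshold else "Mainstream"
```

Under this encoding, both High-weight signals firing is enough to cross the threshold, while the two Medium signals alone are not.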
You can see exactly which sources covered (or missed) any given story on the article page's Source Coverage Map, or explore the full picture on our Source Coverage Heatmap.
AI-Assisted Contextualization
How we add value without adding bias
For every article, we generate supplementary context using AI, but with strict editorial guardrails. The AI does not write news; it summarizes and contextualizes existing reporting. Every generated element follows these rules:
The Bottom Line
A single sentence that distills the story into plain English. Written at an 8th-grade reading level. No jargon, no loaded language. Answers: 'What happened and why does it matter?'
How This Affects You
A concrete, personal impact statement. Instead of abstract policy language, we translate the story into real-world consequences: 'This regulation change could increase your monthly energy bill by $15–$25.'
AI Summary
A factual summary of the original article, written in wire-service tone. Every claim is attributed to its source. Banned language includes loaded adjectives (shocking, alarming, devastating), editorial framing (secretly, suspiciously), and opinion statements.
What's Being Done
For overlooked stories, we identify what actions are being taken: legislative responses, legal challenges, community organizing, or institutional reforms. This prevents 'doom scrolling' by connecting problems to solutions.
Important: AI-generated content is always clearly labeled and supplements, never replaces, the original reporting. Every article links directly to its primary source. We encourage readers to read the original reporting in full.
Editorial Tone Standards
The language rules we enforce
We Never Use
- × Loaded adjectives: shocking, alarming, disturbing, outrageous, horrifying, devastating, unprecedented
- × Editorial framing: secretly, quietly, suspiciously, conveniently, ironically
- × Opinion statements or value judgments about people, policies, or outcomes
- × Conspiracy-adjacent language or unverified speculation
We Always Use
- ✓ AP/Reuters wire-service tone: factual, measured, attribution-heavy
- ✓ Source attribution: "According to [source]..." for every factual claim
- ✓ Precise language: specific numbers, dates, and named entities over vague descriptions
- ✓ Multiple perspectives when a story involves disputed claims or ongoing debate
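The banned-word lists above lend themselves to a mechanical first-pass check. A minimal sketch of such a checker, using whole-word matching so that, for example, "report" is never flagged (the word lists come from this page; the function and category names are illustrative):

```python
import re

# Banned terms taken from the lists above; the checker itself is a sketch,
# not the production neutrality filter.
BANNED = {
    "loaded_adjectives": ["shocking", "alarming", "disturbing", "outrageous",
                          "horrifying", "devastating", "unprecedented"],
    "editorial_framing": ["secretly", "quietly", "suspiciously",
                          "conveniently", "ironically"],
}

def flag_banned_language(text: str) -> list[tuple[str, str]]:
    """Return (category, word) pairs for every banned term found in text."""
    lowered = text.lower()
    hits = []
    for category, words in BANNED.items():
        for word in words:
            if re.search(rf"\b{word}\b", lowered):  # whole-word match only
                hits.append((category, word))
    return hits
```

Opinion statements and speculation, unlike fixed word lists, cannot be caught by simple pattern matching and would still require editorial review.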
