Quality dashboard
How clean is the data?
Trust through provenance. Every metric here is computed live from the production database.
| Metric | Value | Basis |
| --- | --- | --- |
| Edges with ≥3 source citations | 6.2% | 5,093 total edges |
| Entities enriched | 94.6% | Wikipedia · Crunchbase · arXiv |
| Average finding confidence | 75.1% | 2,850 findings |
| Hypothesis confirmation rate | 0.0% | 124 resolved hypotheses |
| Prediction accuracy | 0.0% | 174 resolved predictions |
| Mention coverage | 82.2% | 13,543 mentions tracked |
📐 What these mean
- **Edges with ≥3 sources**: A relationship between two entities (e.g., Anthropic ── partnered ── Amazon) is only considered firmly verified when at least 3 independent sources mention it.
- **Entities enriched**: The percentage of entities augmented with external metadata (Wikipedia summary, Crunchbase profile, arXiv author profile). Higher = richer entity pages.
- **Average finding confidence**: The mean confidence (calibrated 0-100%) across every finding the lab has written. Calibrated = checked against later evidence.
- **Hypothesis confirmation rate**: Of the hypotheses the lab raised that have since been resolved, the fraction confirmed correct rather than disconfirmed. The lab updates its weighting based on this.
- **Prediction accuracy**: The share of falsifiable predictions that resolved as the lab forecast. This is the calibration baseline for all confidence scores.
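As a minimal sketch of how the headline percentages above could be computed, here is an illustrative version over toy in-memory records. The field names (`citations`, `confidence`, `confirmed`) and the data are hypothetical, not the dashboard's actual schema, which reads from the production database.

```python
from statistics import mean

# Hypothetical records; the real dashboard queries the production database.
edges = [
    {"citations": 4}, {"citations": 1}, {"citations": 3}, {"citations": 2},
]
findings = [{"confidence": 0.80}, {"confidence": 0.71}]
resolved_hypotheses = [{"confirmed": True}, {"confirmed": False}]

def pct(part, whole):
    """Percentage, guarded against an empty denominator."""
    return 100.0 * part / whole if whole else 0.0

# Edges with >= 3 source citations, as a share of all edges.
well_sourced = sum(1 for e in edges if e["citations"] >= 3)
edge_pct = pct(well_sourced, len(edges))  # 2 of 4 -> 50.0

# Average finding confidence, reported on a 0-100% scale.
avg_confidence = 100.0 * mean(f["confidence"] for f in findings)

# Confirmation rate counts only hypotheses that have been resolved.
confirm_rate = pct(
    sum(1 for h in resolved_hypotheses if h["confirmed"]),
    len(resolved_hypotheses),
)
```

The key design point the real metrics share: rates are computed over *resolved* items only, so open hypotheses and pending predictions never inflate or deflate the score.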
Why we publish this.
The Living Graph is autonomous — but autonomy without measurement is hand-waving. Every cycle the brain runs writes its score. Every prediction it makes is either confirmed or disconfirmed. That feedback loop is how confidence gets calibrated. Read the method →
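The calibration loop described above can be sketched as follows: group resolved predictions by their stated confidence and compare each group's stated confidence to the fraction that actually came true. The prediction records here are illustrative, not the lab's real data.

```python
from collections import defaultdict

# Hypothetical resolved predictions: (stated confidence, came true?).
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False),
    (0.3, False),
]

# Bucket outcomes by stated confidence.
buckets = defaultdict(list)
for conf, outcome in predictions:
    buckets[round(conf, 1)].append(outcome)

# For a well-calibrated forecaster, the realized hit rate in each
# bucket tracks the stated confidence; systematic gaps are the signal
# used to re-weight future confidence scores.
for conf in sorted(buckets):
    hit_rate = sum(buckets[conf]) / len(buckets[conf])
    print(f"stated {conf:.0%} -> realized {hit_rate:.0%} "
          f"over {len(buckets[conf])} predictions")
```

With the toy data, the 90%-confidence bucket resolves true only 2 times out of 3, which is exactly the kind of gap a calibration pass would correct.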