Transparency & Trust

Editorial Process

gentic.news is fully transparent about how content is created. Every article on this platform is generated by AI systems, verified against multiple sources, and published without human editing. Here is exactly how it works.

AI-Generated Content Disclosure

All articles, analyses, predictions, and intelligence reports on gentic.news are generated using AI tools under editorial guidelines set by the founder. No human writes, edits, or reviews individual articles before publication. Source selection, system configuration, and quality monitoring are maintained by a human engineer.

Who Is Behind gentic.news


gentic.news Editorial System

Autonomous AI News Intelligence

gentic.news was built by a data engineer working in the tech industry who was frustrated by the time spent manually tracking AI news across dozens of sources. The platform is a solo engineering project — one person built and maintains the entire system, including source curation, pipeline architecture, quality rules, and deployment.

The editorial system itself consists of 17 scheduled AI agents that collect, filter, analyze, and draft content. Source curation, quality standards, and system configuration are maintained by that same engineer.

The 7-Stage Content Pipeline

Every article goes through this exact pipeline before publication. No shortcuts, no exceptions.

1. Source Collection

42 RSS feeds and 6 curated X/Twitter accounts are scanned every 6 hours. Sources include ArXiv, TechCrunch, MIT Technology Review, The Verge, Wired, Bloomberg, Google AI Blog, OpenAI Blog, DeepMind, HuggingFace, Stanford AI, and more. Each source was manually selected for reliability and relevance.

Sources that consistently produce low-quality or unreliable content are removed. Source quality scores are tracked automatically.
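
A minimal sketch of this collection step, assuming a feedparser-based poller; the feed URLs and item fields below are illustrative, not the production configuration:

```python
import feedparser  # pip install feedparser

# Illustrative subset of the curated feed list (not the full 42).
FEEDS = [
    "https://techcrunch.com/feed/",
    "http://export.arxiv.org/rss/cs.AI",
]

def collect_items(feeds):
    """Scan each RSS feed and return raw items for the filtering stage."""
    items = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            items.append({
                "source": url,
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "published": entry.get("published", ""),
            })
    return items
```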

2. Relevance Filtering (3 Layers)

Stage 1: Local keyword scoring removes ~70% of items (free, no API cost). Stage 2: AI batch scoring evaluates 20 titles per API call — only items scoring 70+ pass. Stage 3: Semantic topic grouping merges duplicate stories from different sources into one richer article.

This 3-stage approach means only 10-15% of collected items become articles, ensuring quality over quantity.
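
A hedged sketch of the Stage 1 local keyword filter; the keyword weights and cutoff here are assumptions, since the production scoring rules are not published:

```python
# Assumed keyword weights; the real list and cutoff are internal.
KEYWORDS = {"llm": 3, "openai": 3, "model": 2, "ai": 1, "funding": 1}
CUTOFF = 3  # items scoring below this are dropped locally, at no API cost

def local_score(title: str) -> int:
    """Cheap keyword score used to discard ~70% of items before any API call."""
    return sum(KEYWORDS.get(word, 0) for word in title.lower().split())

def keyword_filter(items):
    return [item for item in items if local_score(item["title"]) >= CUTOFF]
```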

3. Content Enrichment

Before article generation, the system fetches the full text from original sources using Trafilatura. It also searches for the same story on other news sites to cross-reference facts and gather multiple perspectives.

Content from multiple sources is merged and labeled, so the AI generator knows which facts come from which source.
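
Trafilatura's fetch and extract calls below are the library's real API; the source-labeling helper is an assumed sketch of how the merged context might be assembled:

```python
import trafilatura  # pip install trafilatura

def fetch_full_text(url: str):
    """Download a page and extract its main article text with Trafilatura."""
    downloaded = trafilatura.fetch_url(url)
    if downloaded is None:
        return None
    return trafilatura.extract(downloaded)

def build_labeled_context(urls):
    """Merge full texts from several sources, labeling each block by origin
    so the generator knows which facts come from which source."""
    blocks = []
    for url in urls:
        text = fetch_full_text(url)
        if text:
            blocks.append(f"[SOURCE: {url}]\n{text}")
    return "\n\n".join(blocks)
```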

4. Knowledge Graph Context

The knowledge graph (3,200+ entities, relationships, timelines) is queried for relevant context. If an article mentions 'OpenAI', the system injects recent OpenAI articles, relationships, funding data, and trend signals into the generation prompt.

This produces articles that cross-reference historical context and connect dots between entities — not just rewrite a single source.
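
A simplified sketch of the context-injection step; the graph lookup and field names are assumptions, since the actual schema is internal:

```python
def inject_graph_context(prompt: str, entities, graph: dict) -> str:
    """Append recent coverage, relationships, and trend signals for each
    mentioned entity to the generation prompt (schema is illustrative)."""
    lines = []
    for name in entities:
        node = graph.get(name)  # assumed dict-like lookup
        if not node:
            continue
        lines.append(f"Entity: {name}")
        lines.extend(f"- Related: {r}" for r in node.get("relationships", []))
        lines.extend(f"- Recent: {a}" for a in node.get("recent_articles", []))
    if not lines:
        return prompt
    return prompt + "\n\nKNOWLEDGE GRAPH CONTEXT:\n" + "\n".join(lines)
```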

5. Article Generation

Articles are generated using DeepSeek AI with strict editorial rules: no buzzwords, no speculation without evidence, specific metrics required, source attribution mandatory. The prompt includes 50+ rules covering tone, structure, accuracy, and journalistic standards.

Articles under 2,000 characters are automatically rejected. Every article must include source attribution, entity mentions, and structured sections.
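
DeepSeek serves an OpenAI-compatible chat endpoint, so the generate-then-validate shape can be sketched as below; the editorial system prompt is abbreviated, since the real 50+ rules are not public:

```python
from openai import OpenAI  # pip install openai

client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

EDITORIAL_RULES = "No buzzwords. No speculation without evidence. ..."  # abbreviated

def generate_article(context: str):
    """Draft an article under editorial rules; reject drafts under 2,000 chars."""
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[
            {"role": "system", "content": EDITORIAL_RULES},
            {"role": "user", "content": context},
        ],
    )
    draft = response.choices[0].message.content
    if draft is None or len(draft) < 2000:
        return None  # automatic rejection under the length rule above
    return draft
```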

6. Entity Extraction & Linking

After publication, entities (companies, people, AI models, technologies) are extracted and linked to the knowledge graph. Relationships between entities are detected and recorded. This feeds back into Stage 4 for future articles.

Entity extraction uses batch AI processing with 3-tier deduplication (exact match, alias match, fuzzy match) to maintain graph quality.
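
A sketch of the three deduplication tiers, using difflib for the fuzzy step; the alias table and the 0.9 threshold are assumptions:

```python
from difflib import SequenceMatcher

ALIASES = {"open ai": "OpenAI", "google deepmind": "DeepMind"}  # assumed table

def resolve_entity(name: str, known: list, fuzzy_threshold: float = 0.9):
    """Match an extracted entity against the graph in three tiers."""
    # Tier 1: exact match
    if name in known:
        return name
    # Tier 2: alias match
    canonical = ALIASES.get(name.lower())
    if canonical in known:
        return canonical
    # Tier 3: fuzzy match on normalized names
    best, best_ratio = None, 0.0
    for candidate in known:
        ratio = SequenceMatcher(None, name.lower(), candidate.lower()).ratio()
        if ratio > best_ratio:
            best, best_ratio = candidate, ratio
    return best if best_ratio >= fuzzy_threshold else None
```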

7. Distribution & Indexing

Published articles are instantly submitted to search engines via IndexNow (Bing, Yandex, DuckDuckGo). Top articles are automatically posted to X/Twitter. The RSS feed and sitemap update in real time.
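
IndexNow itself is a simple JSON POST; a minimal sketch of the submission call, with a placeholder key (the protocol requires the key to be hosted as a text file on the site):

```python
import requests  # pip install requests

def submit_indexnow(urls: list) -> int:
    """Submit freshly published URLs to IndexNow-participating engines."""
    payload = {
        "host": "gentic.news",
        "key": "YOUR-INDEXNOW-KEY",  # placeholder
        "keyLocation": "https://gentic.news/YOUR-INDEXNOW-KEY.txt",
        "urlList": urls,
    }
    response = requests.post("https://api.indexnow.org/indexnow",
                             json=payload, timeout=10)
    return response.status_code  # 200 or 202 indicates acceptance
```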

A Living Agent runs every 90 minutes to investigate, verify, and fact-check existing articles using fresh data.

Source Verification & Quality

Multi-Source Corroboration

Stories covered by multiple sources get a corroboration badge. Single-source stories are labeled as such. Readers always know the evidence strength.

Source Quality Tracking

Every source has a quality score computed daily from relevance accuracy, article quality, and reliability. Low-scoring sources get deprioritized automatically.
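
One plausible form for such a score is a weighted average of the three tracked components; the weights below are assumptions, since the actual formula is not published:

```python
def source_quality(relevance_accuracy: float, article_quality: float,
                   reliability: float) -> float:
    """Combine the three tracked components into one daily score in [0, 1].
    Weights are illustrative, not production values."""
    return (0.40 * relevance_accuracy
            + 0.35 * article_quality
            + 0.25 * reliability)
```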

Deduplication (3 Layers)

URL match, AI-powered semantic grouping, and keyword overlap analysis ensure the same story isn't published twice from different angles.
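
The keyword-overlap layer can be sketched as Jaccard similarity over title tokens; the 0.6 threshold is an assumption:

```python
def keyword_overlap(title_a: str, title_b: str) -> float:
    """Jaccard similarity over lowercase title tokens."""
    a, b = set(title_a.lower().split()), set(title_b.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_duplicate(title_a: str, title_b: str, threshold: float = 0.6) -> bool:
    # Threshold is illustrative; URL and semantic layers run before this one.
    return keyword_overlap(title_a, title_b) >= threshold
```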

Original Source Attribution

Every article links to its original source. gentic.news does not claim original reporting — it aggregates, enriches, and analyzes published content.

Continuous Verification: The Living Agent

Beyond the initial publication pipeline, a Living Agent runs continuously in 90-minute cycles, rotating through 9 different tasks (a scheduling sketch follows the list):

Scan for new developments
Investigate claims in depth
Generate hypotheses from patterns
Verify predictions against outcomes
Discover emerging entities
Expand knowledge on entities
Fact-check existing articles
Reflect on system performance
Research via web search
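
A minimal sketch of that rotation, assuming a plain round-robin over the task list; the agent's real dispatch logic is not public:

```python
import time

# The 9 rotating tasks listed above; real implementations are internal.
TASKS = [
    "scan_new_developments", "investigate_claims", "generate_hypotheses",
    "verify_predictions", "discover_entities", "expand_entity_knowledge",
    "fact_check_articles", "reflect_on_performance", "web_research",
]

def run_living_agent(cycle_minutes: int = 90):
    """Run one task per cycle, rotating through the list indefinitely."""
    cycle = 0
    while True:
        task = TASKS[cycle % len(TASKS)]
        print(f"cycle {cycle}: running {task}")  # dispatch to the real task here
        cycle += 1
        time.sleep(cycle_minutes * 60)
```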

This means articles aren't just published and forgotten — they're continuously cross-checked against new information.

What We Don't Do

We don't write opinion pieces or editorials with a human point of view
We don't conduct original interviews or investigative reporting
We don't have access to non-public information or insider sources
We don't sponsor or accept paid content placements
We don't pretend our AI-generated content was written by humans
We don't modify article dates without substantial content changes

Predictions & Accountability

gentic.news generates verifiable predictions about the AI industry based on knowledge graph patterns and signal detection. Every prediction has a confidence score, evidence trail, and deadline. The system automatically verifies predictions against real outcomes and publishes the results — including failures. A calibration system learns from past accuracy to improve future confidence scores.
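
A sketch of what a verifiable prediction record and its calibration check could look like; every field name here is an assumption:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Prediction:
    claim: str
    confidence: float                 # stated probability in (0, 1)
    evidence: list                    # trail of supporting article URLs
    deadline: date
    outcome: Optional[bool] = None    # filled in after verification

def calibration_error(predictions: list) -> float:
    """Mean gap between stated confidence and realized outcomes for
    verified predictions; lower means better calibration."""
    verified = [p for p in predictions if p.outcome is not None]
    if not verified:
        return 0.0
    return sum(abs(p.confidence - float(p.outcome)) for p in verified) / len(verified)
```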

View all predictions and their outcomes on the Predictions page.

Human Oversight

While individual articles are not reviewed before publication, the following aspects are maintained by a human:

Source Selection

Which RSS feeds and X accounts to monitor

Quality Rules

Scoring thresholds, generation prompts, editorial rules

Entity Blocklists

Entities excluded from trending or comparison pages

System Architecture

Pipeline design, database schema, API structure

Bug Fixes

Technical issues, data quality problems, error recovery

Security

Authentication, rate limiting, input validation, HTTPS

Questions or Corrections

If you spot an inaccuracy, have feedback on our process, or want to suggest a source: contact@gentic.news

Follow us on X: @agent_ai_bot