gentic.news — AI News Intelligence Platform

AI at the Edge

30 articles about AI at the edge

Google's Nano-Banana 2: The Edge AI Revolution That Puts 4K Image Generation in Your Pocket

Google has officially unveiled Nano-Banana 2, a specialized AI model delivering sub-second 4K image synthesis with advanced subject consistency entirely on-device. This breakthrough represents a strategic pivot toward edge computing, challenging the cloud-centric paradigm of current generative AI.

75% relevant

Trump's AI Energy Summit: Tech Giants Pledge to Self-Generate Power Amid Grid Concerns

President Donald Trump is convening Amazon, Google, Meta, Microsoft, xAI, Oracle, and OpenAI at the White House to sign a 'Rate Payer Protection Pledge,' committing them to generate or purchase their own electricity for new AI data centers, signaling a major shift in how tech's energy demands are addressed.

85% relevant

REPO: The New Frontier in AI Safety That Actually Removes Toxic Knowledge from LLMs

Researchers have developed REPO, a novel method that detoxifies large language models by erasing harmful representations at the neural level. Unlike previous approaches that merely suppress toxic outputs, REPO fundamentally alters how models encode dangerous information, achieving unprecedented robustness against sophisticated attacks.

75% relevant

Andrej Karpathy's LLM-Wiki Framework Solves AI Amnesia with Persistent Knowledge

Andrej Karpathy published a two-page framework called LLM-Wiki that transforms how AI systems handle accumulated knowledge. Instead of retrieving from raw documents each time, the AI compiles sources into its own structured wiki that persists across sessions.

85% relevant

Claude AI Prompts Claim to Build Hedge Fund-Level Trading Strategies

A prompt collection claims to enable Claude to build and backtest hedge fund-level trading strategies. The prompts aim to automate quantitative analysis tasks typically performed by highly paid analysts.

87% relevant

New Research Proposes FilterRAG and ML-FilterRAG to Defend Against Knowledge Poisoning Attacks in RAG Systems

Researchers propose two novel defense methods, FilterRAG and ML-FilterRAG, to mitigate 'PoisonedRAG' attacks where adversaries inject malicious texts into a knowledge source to manipulate an LLM's output. The defenses identify and filter adversarial content, maintaining performance close to clean RAG systems.

92% relevant

Microsoft's Satya Nadella Details Internal 'Lean for Knowledge Work' AI Initiative

Microsoft CEO Satya Nadella described the company's internal application of AI to streamline knowledge work, framing it as a 'Lean' manufacturing-style efficiency push for cognitive tasks. The initiative focuses on using AI to reduce process friction and improve productivity across internal operations.

85% relevant

Anthropic's Stealth Education Revolution: Free AI Curriculum Democratizes Technical Knowledge

Anthropic has launched a comprehensive, completely free AI curriculum designed to make technical AI education accessible to everyone. The curriculum covers fundamentals to advanced topics without tuition, waitlists, or prerequisites, potentially reshaping how AI knowledge is distributed.

85% relevant

Multimodal Knowledge Graphs Unlock Next-Generation AI Training Data

Researchers have developed MMKG-RDS, a novel framework that synthesizes high-quality reasoning training data by mining multimodal knowledge graphs. The system addresses critical limitations in existing data synthesis methods and improves model reasoning accuracy by 9.2% with minimal training samples.

80% relevant

GitNexus Revolutionizes Code Exploration: Browser-Based AI Transforms GitHub Repositories into Interactive Knowledge Graphs

A new tool called GitNexus transforms any GitHub repository into an interactive knowledge graph with AI chat capabilities, running entirely in the browser without backend infrastructure. This breakthrough enables developers to visualize and query complex codebases through intuitive graph interfaces and natural language conversations.

85% relevant

Omar Sarayra Builds LLM Artifact Generator for AI Knowledge Discovery

Omar Sarayra created a system that transforms dense LLM knowledge bases into consumable visual artifacts, such as a visual pulse of AI discussions on Hacker News. He argues this format could become a new medium for staying current.

87% relevant

Google Launches AI Edge Eloquent: Free, Offline-First Dictation App on iOS

Google has quietly launched AI Edge Eloquent, a free, subscription-less dictation app for iOS. It uses a Gemma-based speech recognition model to process audio locally, removing filler words and self-corrections to produce cleaner text.

97% relevant

Zero-Shot Cross-Domain Knowledge Distillation: A YouTube-to-Music Case Study

Google researchers detail a case study transferring knowledge from YouTube's massive video recommender to a smaller music app, using zero-shot cross-domain distillation to boost ranking models without training a dedicated teacher. This offers a practical blueprint for improving low-traffic AI systems.

96% relevant
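The blurb doesn't spell out Google's exact objective, but cross-domain distillation of this kind typically builds on a standard temperature-scaled distillation loss: the student is trained to match the teacher's softened score distribution over candidate items. A minimal sketch of that ingredient (not the paper's actual loss, which may differ) looks like this:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution, softened by temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between softened teacher and student distributions.

    In the zero-shot cross-domain setting, teacher_logits would come from the
    source-domain ranker scoring target-domain items directly, so no dedicated
    teacher model is trained for the target domain.
    """
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))
```

The loss is minimized exactly when the student reproduces the teacher's (softened) ranking distribution, which is why it transfers relative preferences rather than raw scores.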

Future-Proof Your AI Search: Why Static Knowledge Bases Fail Luxury Retail

New research reveals AI retrieval benchmarks degrade over time as information changes. For luxury brands using AI for product recommendations and clienteling, this means static knowledge bases become stale, hurting customer experience and sales.

60% relevant

FAERec: A New Framework for Fusing LLM Knowledge with Collaborative Signals for Tail-Item Recommendations

A new paper introduces FAERec, a framework designed to improve recommendations for niche items by better fusing semantic knowledge from LLMs with collaborative filtering signals. It addresses structural inconsistencies between embedding spaces to enhance model accuracy.

88% relevant

How Anthropic's Team Uses Skills as Knowledge Containers (And What It Means For Your CLAUDE.md)

Learn how to use Claude Code skills not just for automation but as living knowledge bases, following patterns from Anthropic's own engineering team.

70% relevant

Understanding the Interplay between LLMs' Utilisation of Parametric and Contextual Knowledge: A keynote at ECIR 2025

A keynote at ECIR 2025 will present research on how Large Language Models (LLMs) balance their internal, parametric knowledge with external, contextual information. This is critical for deploying reliable AI in knowledge-intensive tasks where models must correctly use provided context, not just their training data.

70% relevant

Google DeepMind Maps AI Attack Surface, Warns of 'Critical' Vulnerabilities

Google DeepMind researchers published a paper mapping the fundamental attack surface of AI agents, identifying critical vulnerabilities that could lead to persistent compromise and data exfiltration. The work provides a framework for red-teaming and securing autonomous AI systems before widespread deployment.

89% relevant

Developer Ships LLM-Powered Knowledge Graph Days After Karpathy Tweet

Following a tweet by Andrej Karpathy, a developer rapidly built and released a working implementation of an LLM-powered knowledge graph on GitHub, showcasing the speed of open-source AI development.

87% relevant

Andrej Karpathy's Personal Knowledge Management System Uses LLM Embeddings Without RAG for 400K-Word Research Base

AI researcher Andrej Karpathy has developed a personal knowledge management system that processes 400,000 words of research notes using LLM embeddings rather than traditional RAG architecture. The system enables semantic search, summarization, and content generation directly from his Obsidian vault.

91% relevant
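Karpathy's actual pipeline isn't detailed in the blurb, but the embeddings-without-RAG idea it describes (embed each note whole, rank by cosine similarity to a query, with no chunking or generation stage) can be sketched as follows. A toy bag-of-words embedder stands in for a real embedding model:

```python
import math
import re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def embed(text, vocab):
    """Toy stand-in for a real embedding model: L2-normalized term counts."""
    toks = tokenize(text)
    counts = [toks.count(w) for w in vocab]
    norm = math.sqrt(sum(c * c for c in counts))
    return [c / norm for c in counts] if norm else counts

def semantic_search(query, notes, top_k=3):
    """Rank whole notes by cosine similarity to the query: no chunking, no RAG."""
    vocab = sorted({w for doc in notes + [query] for w in tokenize(doc)})
    note_vecs = [embed(n, vocab) for n in notes]
    q = embed(query, vocab)
    scores = [sum(a * b for a, b in zip(v, q)) for v in note_vecs]
    ranked = sorted(range(len(notes)), key=lambda i: scores[i], reverse=True)
    return [notes[i] for i in ranked[:top_k]]

notes = [
    "Gradient descent converges faster with momentum.",
    "Knowledge graphs link entities via typed edges.",
    "Momentum smooths noisy gradient estimates.",
]
print(semantic_search("gradient momentum", notes, top_k=2))
```

Because whole notes are embedded, shorter notes that match the query score higher under cosine similarity; a production system would swap in a learned embedding model for `embed`.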

New Research Diagnoses LLMs' Struggle with Multiple Knowledge Updates in Context

A new arXiv paper reveals a persistent bias in LLMs when facts are updated multiple times within a long context. Models increasingly favor the earliest version, failing to track the latest state—a critical flaw for dynamic knowledge tasks.

78% relevant

Federated RAG: A New Architecture for Secure, Multi-Silo Knowledge Retrieval

Researchers propose a secure Federated Retrieval-Augmented Generation (RAG) system using Flower and confidential compute. It enables LLMs to query knowledge across private data silos without centralizing sensitive documents, addressing a major barrier for enterprise AI.

72% relevant

Claude Code Plugin 'Understand' Generates Interactive Knowledge Graphs from Codebases

A new Claude Code plugin called 'Understand' automatically analyzes any codebase to create an interactive knowledge graph. It enables developers to query code in plain English, visualize dependencies, and generate onboarding guides.

87% relevant

Knowledge-RAG v3.0: The Local RAG MCP Server That Finally Just Works

Knowledge-RAG v3.0 eliminates Docker/Ollama setup, adds hybrid search with cross-encoder reranking, and auto-indexes your docs—making private RAG in Claude Code a one-command install.

94% relevant
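'Hybrid search with cross-encoder reranking' names a standard two-stage pattern: a cheap first stage blends lexical and dense retrieval scores, then an expensive cross-encoder rescores only the short list. The sketch below shows that structure with toy stand-in scorers (token and trigram overlap), not Knowledge-RAG's actual models:

```python
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def lexical_score(query, doc):
    """Stand-in for BM25: fraction of query terms present in the doc."""
    q = tokens(query)
    return len(q & tokens(doc)) / len(q) if q else 0.0

def dense_score(query, doc):
    """Stand-in for embedding similarity: Jaccard overlap of character trigrams."""
    def grams(t):
        t = re.sub(r"\s+", " ", t.lower())
        return {t[i:i + 3] for i in range(len(t) - 2)}
    a, b = grams(query), grams(doc)
    return len(a & b) / len(a | b) if a | b else 0.0

def cross_encoder(query, doc):
    """Stand-in for a cross-encoder, which scores the query/doc pair jointly."""
    return lexical_score(query, doc) + dense_score(query, doc)

def hybrid_search(query, docs, alpha=0.5, k_retrieve=4, k_final=2):
    # Stage 1: cheap retrieval, blending lexical and dense scores.
    blended = sorted(
        docs,
        key=lambda d: alpha * lexical_score(query, d) + (1 - alpha) * dense_score(query, d),
        reverse=True,
    )[:k_retrieve]
    # Stage 2: expensive reranking, applied to the short list only.
    return sorted(blended, key=lambda d: cross_encoder(query, d), reverse=True)[:k_final]
```

The design point is the cost split: the blended first stage scans the whole corpus cheaply, while the cross-encoder, which must process every query/document pair jointly, only ever sees `k_retrieve` candidates.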

The Great AI Contamination: How 2022 Became the Digital Divide in Human Knowledge

AI researcher Ethan Mollick identifies 2022 as the pivotal year when AI began fundamentally altering human-generated content, creating what he calls 'ambient contamination' where AI influence permeates all digital information.

85% relevant

The Uncanny Valley of Truth: How AI Avatars Are Blurring Reality's Edge

AI avatars now replicate human speech patterns, facial expressions, and gestures with unsettling accuracy, creating synthetic personas indistinguishable from real people. This technological leap raises urgent questions about authenticity, trust, and the future of digital communication.

85% relevant

Sam Altman's Warning: The World Is Unprepared for What's Coming in AI

OpenAI CEO Sam Altman has issued a stark warning that the world is unprepared for the AI developments emerging from leading companies. His comments highlight the growing gap between internal industry knowledge and public readiness for transformative technologies.

85% relevant

OpenCode vs Claude Code: What the 2026 Comparison Means for Your CLI Workflow

A new competitor validates Claude Code's terminal-first philosophy, but Claude's mature MCP ecosystem and proven local execution capabilities remain key differentiators for developers.

100% relevant

LLMs Show 'Privileged Access' to Own Policies in Introspect-Bench, Explaining Self-Knowledge via Attention Diffusion

Researchers formalize LLM introspection as computation over model parameters, showing frontier models outperform peers at predicting their own behavior. The study provides causal evidence for how introspection emerges via attention diffusion without explicit training.

86% relevant

Boris Cherny's Claude Code Tips Are Now a Skill. Here Is What the Complete Collection Reveals.

A curated collection of expert Claude Code tips is now available as a shareable 'Skill,' revealing proven workflows for faster, more reliable agentic coding.

95% relevant