gentic.news — AI News Intelligence Platform

vlm safety

5 articles about VLM safety in AI news

Embedding distance predicts VLM typographic attack success (r=-0.93)

A new study shows that the embedding distance between the text rendered in an image and the target harmful prompt strongly predicts typographic attack success rate (r = -0.71 to -0.93). The researchers introduce CWA-SSA optimization, which recovers the readability of the rendered text and bypasses safety alignment without requiring access to the model.

72% relevant
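The study's CWA-SSA procedure and embedding model aren't detailed in this summary; as a minimal sketch of the correlation the headline cites, the snippet below computes a Pearson r between (hypothetical, made-up) embedding distances and attack success rates. Smaller distance should track higher success, giving a strongly negative r, as the study reports.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical measurements (illustrative only, not from the paper):
# cosine distance between the embedding of the text rendered in the
# image and the harmful prompt, vs. observed attack success rate.
distances = [0.10, 0.20, 0.35, 0.50, 0.70]
success   = [0.90, 0.75, 0.55, 0.35, 0.10]

print(pearson_r(distances, success))  # strongly negative, close to -1
```

A correlation in the -0.71 to -0.93 range, as reported, would mean the distance alone explains roughly half to nearly all of the variance in attack success.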

Nemotron ColEmbed V2: NVIDIA's New SOTA Embedding Models for Visual Document Retrieval

NVIDIA researchers have released Nemotron ColEmbed V2, a family of three models (3B, 4B, and 8B parameters) that set a new state of the art on the ViDoRe benchmark for visual document retrieval. The models use a 'late interaction' scoring mechanism and are built on top of pre-trained VLMs such as Qwen3-VL and NVIDIA's own Eagle 2. This matters because it directly addresses the challenge of retrieving information from visually rich documents like PDFs and slides within RAG systems.

74% relevant
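Nemotron ColEmbed V2's exact scoring function isn't given in the summary, but 'late interaction' conventionally refers to the ColBERT-style MaxSim operator: each query token embedding is matched against its best document embedding (here, image-patch embeddings), and the maxima are summed. A minimal sketch with toy 2-d vectors:

```python
def maxsim_score(query_vecs, doc_vecs):
    """ColBERT-style late interaction: for each query token embedding,
    take its maximum dot product over all document (patch) embeddings,
    then sum across query tokens."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

# Toy example: a two-token query against two candidate documents.
query = [[1.0, 0.0], [0.0, 1.0]]
doc_a = [[1.0, 0.0], [0.0, 1.0]]  # covers both query concepts
doc_b = [[1.0, 0.0], [1.0, 0.0]]  # covers only the first

print(maxsim_score(query, doc_a))  # higher: every query token finds a match
print(maxsim_score(query, doc_b))
```

Keeping per-token embeddings and deferring interaction to query time is what distinguishes this from single-vector retrieval, at the cost of larger indexes.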

Former Li Auto Execs Launch Embodied AI Startup, Home Robot Due H1 2027

A new startup founded by former Li Auto executives is entering the embodied AI space, focusing on the home environment. Their first physical robot product is scheduled for release in the first half of 2027.

95% relevant

Qualcomm's Arduino Ventuno Q: A Powerhouse Single-Board Computer for the Next Wave of Physical AI

Qualcomm and Arduino have launched the Ventuno Q, a high-performance single-board computer designed specifically for robotics and physical AI applications. Powered by the Dragonwing IQ8 processor with a dedicated NPU and paired with a low-latency microcontroller, it enables complex, offline AI tasks like object tracking and gesture recognition for systems that interact with the real world.

80% relevant

Medical AI Breakthrough: New Method Teaches Vision-Language Models to Understand Clinical Negation

Researchers have developed a novel fine-tuning technique that significantly improves how medical vision-language models understand negation in clinical reports. The method uses causal tracing to identify which neural network layers are most responsible for processing negative statements, then selectively trains those layers.

70% relevant
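The paper's causal-tracing implementation isn't described in the summary; the sketch below shows only the selection step it implies, with hypothetical layer names and effect scores. Given a per-layer causal effect (how strongly restoring that layer's activations recovers correct handling of a negated statement), the top-scoring layers are kept trainable and the rest frozen.

```python
def select_layers_to_train(causal_effects, k=2):
    """Pick the k layers with the highest causal-tracing effect scores;
    a fine-tuning loop would unfreeze only these and freeze the rest."""
    ranked = sorted(causal_effects, key=causal_effects.get, reverse=True)
    return set(ranked[:k])

# Hypothetical scores; layer names and values are illustrative only.
effects = {"layer.3": 0.05, "layer.7": 0.42, "layer.8": 0.39, "layer.11": 0.10}
print(select_layers_to_train(effects))  # the two layers most implicated in negation
```

Training only the implicated layers limits how far the update can drift the model from its pre-trained behavior on everything unrelated to negation.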