on device

30 articles about "on device" in AI news

Roboflow's RF-DETR Model Ported to Apple MLX, Enabling Real-Time On-Device Instance Segmentation

Roboflow's RF-DETR object detection model is now available on Apple's MLX framework, enabling real-time instance segmentation on Apple Silicon devices. This port unlocks new on-device visual analysis applications in robotics and for augmenting vision-language models.

89% relevant

ModelBest Hits $1B+ Valuation for On-Device Foundation Models

ModelBest, a Chinese developer of on-device AI foundation models, raised several hundred million RMB, reaching a valuation exceeding $1 billion. The funding will accelerate its push to deploy efficient models directly on smartphones and IoT devices.

98% relevant

Ethan Mollick: Gemma 4 Impressive On-Device, But Agentic Workflows Doubted

Wharton professor Ethan Mollick finds Google's Gemma 4 powerful for on-device use but is skeptical about its ability to execute true agentic workflows, citing limitations in judgment and self-correction.

75% relevant

Apple's On-Device Reranking Model for Private Visual Search: A Technical Breakdown

Analysis of Apple's Enhanced Visual Search system that uses multimodal features, geo-signals, and index debiasing to identify landmarks entirely on-device. This represents a significant advancement in privacy-preserving AI for visual recognition.

100% relevant

Apple Reportedly Gains Full Internal Access to Google's Gemini for On-Device Model Distillation

A report claims Apple's AI deal with Google includes full internal model access, enabling distillation of Gemini's reasoning into smaller, on-device models. This would allow Apple to build specialized, efficient AI without relying solely on cloud APIs.

95% relevant

KAIST Develops 'SoulMate' AI Chip for Real-Time, On-Device Personalization

KAIST researchers have developed a new AI semiconductor, 'SoulMate,' that enables real-time, on-device learning of user habits and preferences. The chip combines RAG and LoRA for instant personalization while consuming minimal power, aiming for commercialization by 2027.

70% relevant

Stanford's OpenJarvis: The Open-Source Framework Bringing Personal AI Agents to Your Device

Stanford researchers have released OpenJarvis, an open-source framework for building personal AI agents that operate entirely on-device. This local-first approach prioritizes privacy and autonomy while providing tools, memory, and learning capabilities.

100% relevant

Open-Source Project Unlocks Apple's On-Device AI for Any Device on Your Network

Perspective Intelligence Web, an open-source project, enables any device with a browser to access Apple's powerful on-device AI models running locally on a Mac. This MIT-licensed solution addresses privacy concerns by keeping all processing on your private network while extending Apple Intelligence capabilities to Windows, Linux, Android, and Chromebook devices.

85% relevant

Edge AI Breakthrough: Qwen3.5 2B Runs Locally on iPhone 17 Pro, Redefining On-Device Intelligence

Alibaba's Qwen3.5 2B model now runs locally on iPhone 17 Pro devices, marking a significant breakthrough in edge AI. This development enables sophisticated language processing without cloud dependency, potentially transforming mobile AI applications and user privacy paradigms.

85% relevant

Google's AI Edge Gallery Arrives on iPhone: A Privacy-First Revolution in On-Device Intelligence

Google AI Edge Gallery has launched on iOS, bringing true on-device function calling to iPhones for the first time. Powered by the compact 270M parameter FunctionGemma model, it enables natural voice commands to trigger phone actions like calendar events and flashlight toggles—completely offline.

75% relevant

Google's AICore Beta Enables On-Device Gemini Nano 4 Downloads for Android Phones

A new beta of Google's AICore system service enables users to download Gemini Nano 4 Full and Gemini Nano 4 Fast models directly onto compatible Android phones, including those with Snapdragon 8 Elite Gen 5 chips. This moves beyond pre-installed AI to user-initiated model management.

85% relevant

Apple's Private Cloud Compute: Leak Suggests 4x M2 Ultra Cluster for On-Device AI Offload

A leak suggests Apple's Private Cloud Compute for AI may be built on clusters of four M2 Ultra chips, potentially offering high-performance, private server-side processing for iPhone AI tasks. This would mark Apple's strategic move into dedicated, privacy-focused AI infrastructure.

85% relevant

Perplexity AI Launches On-Device Search Engine: Privacy-First AI Comes Home

Perplexity AI has launched a privacy-first search engine that runs entirely on users' own hardware, eliminating cloud data transmission. This represents a significant shift toward decentralized, secure AI processing that keeps user queries out of reach of corporate surveillance.

85% relevant

The Laptop Agent Revolution: How 24B-Parameter Models Are Redefining On-Device AI

Liquid's LFM2-24B-A2B model runs locally on laptops, selecting tools in under 400ms. Its hybrid architecture enables sparse activation, making powerful AI agents practical for regulated industries and developers without cloud dependencies.

95% relevant

Apple's Neural Engine Jailbroken: Researcher Unlocks On-Device AI Training Capabilities

A researcher has reverse-engineered Apple's private Neural Engine APIs to enable direct transformer training on M-series chips, bypassing CoreML restrictions. This breakthrough could enable battery-efficient local model training and fine-tuning without cloud dependency.

95% relevant

AI Model Decodes Silent Speech from Phone Sensors, No Microphone Needed

A new AI model can reconstruct speech by analyzing imperceptible facial movements captured by smartphone sensors, effectively enabling silent speech recognition without a microphone. This represents a significant leap in sensor fusion and on-device AI.

85% relevant

Efficient Universal Perception Encoder (EUPE) Family Challenges DINOv2

Researchers introduced the Efficient Universal Perception Encoder (EUPE), a family of compact vision models that achieve performance rivaling the larger DINOv2. This could enable high-quality visual understanding on resource-constrained devices.

85% relevant

Gemma 4 Ported to MLX-Swift, Runs Locally on Apple Silicon

Google's Gemma 4 language model has been ported to the MLX-Swift framework by a community developer, making it available for local inference on Apple Silicon Macs and iOS devices through the LocallyAI app.

87% relevant

Google DeepMind Unveils Next-Generation AI Tools and Android XR Platform at I/O 2024

Google's I/O 2024 keynote featured significant AI announcements from Google DeepMind, including new Gemini-powered tools and the official unveiling of Android XR. The extended reality operating system, developed in partnership with Samsung, represents a major expansion of Google's AI ecosystem into wearable devices.

90% relevant

Google Releases Fully Open-Source Gemma 4 AI Model for Local Device Deployment

Google has launched Gemma 4, a fully open-source AI model family available under the Apache 2.0 license. The release marks Google's re-entry into the competitive open-source AI landscape with models optimized for local deployment, including on mobile devices.

86% relevant

Storing Less, Finding More: Novelty Filtering Architecture for Cross-Modal Retrieval on Edge Cameras

A new streaming retrieval architecture uses an on-device 'epsilon-net' filter to retain only semantically novel video frames, dramatically improving cross-modal search accuracy while reducing power consumption to 2.7 mW. This addresses the fundamental problem of redundant frames crowding out correct results in continuous video streams.

82% relevant
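
The 'epsilon-net' filter described above can be sketched independently of the paper: an incoming frame's embedding is retained only if it lies farther than a threshold ε from every embedding already kept, so near-duplicate frames collapse onto a single representative. A minimal NumPy sketch follows; the toy embeddings, the ε value, and the choice of cosine distance are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def epsilon_net_filter(embeddings, eps):
    """Greedily retain only embeddings whose cosine distance to every
    previously retained embedding exceeds `eps`."""
    kept = []       # indices of retained frames
    kept_vecs = []  # their unit-normalized embeddings
    for i, v in enumerate(embeddings):
        v = v / np.linalg.norm(v)
        # cosine distance = 1 - cosine similarity
        if all(1.0 - float(u @ v) > eps for u in kept_vecs):
            kept.append(i)
            kept_vecs.append(v)
    return kept

# Near-duplicate frames collapse onto the first representative.
frames = np.array([
    [1.0, 0.0],    # frame 0
    [0.99, 0.01],  # near-duplicate of frame 0 -> filtered out
    [0.0, 1.0],    # semantically novel frame -> kept
])
print(epsilon_net_filter(frames, eps=0.1))  # -> [0, 2]
```

Only the retained frames need to be embedded, stored, and searched, which is where the accuracy and power savings in the summary would come from.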

Meta's SAM 3 Vision Model Ported to Apple's MLX Framework, Enabling Real-Time Tracking on M3 Max

Meta's Segment Anything Model 3 (SAM 3) has been ported to Apple's MLX framework, enabling real-time object tracking on an M3 Max MacBook Pro. This demonstrates efficient on-device execution of a foundational vision model without cloud dependency.

87% relevant

TurboQuant Ported to Apple MLX, Claims 75% Memory Reduction with Minimal Performance Loss

Developer Prince Canuma has successfully ported the TurboQuant quantization method to Apple's MLX framework, reporting a 75% reduction in memory usage with nearly no performance degradation for on-device AI models.

85% relevant
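
The 75% memory figure quoted above is what you would expect from storing 16-bit weights as 4-bit codes (4/16 = 25% of the original footprint). TurboQuant's actual algorithm isn't described here; the snippet below is a generic 4-bit affine quantization sketch, with all names and parameters being illustrative assumptions, included only to show where that ratio comes from:

```python
import numpy as np

def quantize_4bit(w):
    """Naive per-tensor affine quantization of float16 weights to 4-bit codes.
    Returns integer codes in [0, 15] plus the (scale, offset) needed to
    dequantize. Real schemes quantize per-group and pack two codes per byte."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 15 or 1.0  # guard against a constant tensor
    codes = np.clip(np.round((w - lo) / scale), 0, 15).astype(np.uint8)
    return codes, scale, lo

def dequantize_4bit(codes, scale, lo):
    return codes.astype(np.float16) * scale + lo

w = np.random.randn(1024).astype(np.float16)
codes, scale, lo = quantize_4bit(w)
w_hat = dequantize_4bit(codes, scale, lo)

# 4 bits per weight vs 16 bits per weight -> 75% smaller
print(1 - 4 / 16)                      # 0.75
print(float(np.abs(w - w_hat).max()))  # small reconstruction error
```

The "nearly no performance degradation" claim would hinge on a smarter scheme than this per-tensor version, e.g. per-group scales or outlier handling.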

Perplexity Computer Gains Health App Integration, Enabling Wearable and Medical Record Access

Perplexity Computer now integrates with health apps, wearables, lab results, and medical records, positioning the AI device as a personal health assistant. This expands its utility beyond general web search and productivity.

85% relevant

Violoop's Hardware Bet: A New Frontier in AI Interaction Beyond the Screen

Hardware startup Violoop has secured multi-million dollar funding to develop the world's first 'physical-level AI Operator,' aiming to move AI interaction from purely digital interfaces to tangible, desktop-integrated hardware devices.

100% relevant

Mobile AI Revolution: Full LLMs Now Run Natively on Smartphones

A new React Native binding called llama.rn enables developers to run full large language models such as Llama, Qwen, and Mistral directly on mobile devices with as little as 4 GB of RAM. The framework leverages Metal and NPU acceleration for performance that reportedly surpasses cloud APIs while maintaining complete offline functionality.

85% relevant

Chinese Engineers Develop Novel Waist-Hip Exoskeleton to Revolutionize Load Carrying

Chinese engineers have created a novel waist-hip exoskeleton designed to carry 30–50% of a heavy backpack's load, supporting up to 30 kg. The device pushes the user forward, significantly reducing strain on the back and legs during demanding activities like long hikes or steep climbs.

87% relevant

RunAnywhere's MetalRT Engine Delivers Breakthrough AI Performance on Apple Silicon

RunAnywhere has launched MetalRT, a proprietary GPU inference engine that dramatically accelerates on-device AI workloads on Apple Silicon. Their open-source RCLI tool demonstrates sub-200ms voice AI pipelines, outperforming existing solutions like llama.cpp and Apple's MLX.

80% relevant

Google's TensorFlow 2.21 Revolutionizes Edge AI with Unified LiteRT Framework

Google has launched TensorFlow 2.21, marking LiteRT's transition to a production-ready universal on-device inference framework. This major update delivers faster GPU performance, new NPU acceleration, and seamless PyTorch edge deployment, effectively replacing TensorFlow Lite for mobile and edge applications.

75% relevant

Edge AI for Loss Prevention: Adaptive Pose-Based Detection for Luxury Retail Security

A new periodic adaptation framework enables edge devices to autonomously detect shoplifting behaviors from pose data, offering a scalable, privacy-preserving solution for luxury retail security that outperforms static models by a reported 91.6%.

85% relevant