support
30 articles about support in AI news
MLX-LM v0.9.0 Adds Better Batching, Supports Gemma 4 on Apple Silicon
Apple's MLX-LM framework released version 0.9.0 with enhanced server batching and support for Google's Gemma 4 model, improving local LLM inference efficiency on Apple Silicon. This update addresses a key performance bottleneck for developers running models locally on Mac hardware.
Better-Clawd Fork Adds OpenAI & OpenRouter Support to Claude Code
A new fork of Claude Code removes telemetry, adds OpenAI and OpenRouter support, and claims performance improvements—giving developers backend choice.
Ollama Now Supports Apple MLX Backend for Local LLM Inference on macOS
Ollama, the popular framework for running large language models locally, has added support for Apple's MLX framework as a backend. This enables more efficient execution of models like Llama 3.2 and Mistral on Apple Silicon Macs.
Alibaba's XuanTie C950 CPU Hits 70+ SPECint2006, Claims RISC-V Record with Native LLM Support
Alibaba's DAMO Academy launched the XuanTie C950, a RISC-V CPU scoring over 70 on SPECint2006—the highest single-core performance for the architecture—with native support for billion-parameter LLMs like Qwen3 and DeepSeek V3.
Coinbase CEO: AI Agents Now Write Over 50% of Code, Resolve 60% of Support Tickets
Coinbase CEO Brian Armstrong reports AI agents now generate over half of the company's code and resolve 60% of support tickets. The company equips these agents with stablecoin wallets for autonomous machine-to-machine payments.
CogSearch: A Multi-Agent Framework for Proactive Decision Support in E-Commerce Search
Researchers from JD.com introduce CogSearch, a cognitive-aligned multi-agent framework that transforms e-commerce search from passive retrieval to proactive decision support. Offline benchmarks and online A/B tests show significant improvements in conversion, especially for complex queries.
Support Tokens: The Hidden Mathematical Structure Making LLMs More Robust
Researchers have discovered a surprising mathematical constraint in transformer attention mechanisms that reveals a 'support token' structure analogous to the support vectors in support vector machines (SVMs). This insight enables a simple but powerful training modification that improves LLM robustness without sacrificing performance.
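The article doesn't spell out the mechanism, but the standard intuition behind the attention-SVM connection can be shown in a toy sketch: as attention logits grow in magnitude during training, softmax mass concentrates on the few highest-scoring keys, so the output depends only on those "support tokens", much as an SVM's decision boundary depends only on its support vectors. The score values below are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical attention scores for 6 key tokens (illustrative values).
scores = np.array([2.0, 1.9, 0.5, 0.1, -0.3, -1.0])

# Scaling the logits up (as trained attention weights grow in norm)
# concentrates the softmax mass on the top-scoring keys -- the
# "support tokens" that alone determine the attention output.
for scale in (1.0, 5.0, 25.0):
    w = softmax(scale * scores)
    print(f"scale={scale:5.1f}  weights={np.round(w, 3)}")
```

At large scale the bottom four tokens receive essentially zero weight, so perturbing them no longer changes the attention output, which is one way to read the robustness claim.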
Anthropic's Claude Desktop Apps Gain Windows Support for Computer Use Feature
Anthropic has released Windows versions of Claude Code Desktop and Claude Cowork, bringing the 'computer use' feature—which allows the AI to interact with files and applications on a user's computer—to Windows. This follows the macOS release and marks a key step in Anthropic's desktop strategy.
PlayerZero Launches AI Context Graph for Production Systems, Claims 80% Fewer Support Escalations
AI startup PlayerZero has launched a context graph that connects code, incidents, telemetry, and tickets into a single operational model. The system, backed by the CEOs of Figma, Dropbox, and Vercel, aims to predict failures, trace root causes, and generate fixes before code reaches production.
Skale Launches Desktop AI Agent Running on 300MB RAM with 11+ LLM Provider Support
Skale introduces a desktop AI agent that installs in 30 seconds on Windows and macOS, requiring only 300MB RAM. The tool offers browser automation, calendar integration, and autonomous task execution without terminal access.
How to Run Claude Code on Local LLMs with VibePod's New Backend Support
VibePod now lets you route Claude Code to Ollama or vLLM servers, enabling local model usage and cost savings.
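VibePod's own configuration isn't shown in the summary, but routing of this kind typically works because both Ollama (default port 11434) and vLLM (default port 8000) expose OpenAI-compatible `/v1/chat/completions` endpoints, so a proxy only needs to rewrite the base URL and model name. A stdlib-only sketch of the request such a proxy would send (model name and prompt are illustrative):

```python
import json
import urllib.request

# Assumed local Ollama default; vLLM would be e.g. http://localhost:8000/v1.
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer ollama"},  # Ollama ignores the key
        method="POST",
    )

req = build_chat_request("llama3.2", "Explain KV caching in one sentence.")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` would require a running local server, so it is omitted here.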
The $850 Billion Question: Can OpenAI's Business Model Support Its Lofty IPO Ambitions?
OpenAI's potential IPO faces investor skepticism due to concerns about profitability timelines, high valuation multiples, and intense competition. The company reportedly won't be profitable until at least 2030 while burning significant cash.
AI Titans Unite: Sam Altman's Public Support for Anthropic Signals Industry-Wide Regulatory Push
OpenAI CEO Sam Altman has publicly declared solidarity with Anthropic amid government scrutiny, signaling unprecedented industry alignment on AI regulation. This coordinated stance could reshape how federal agencies approach oversight of rapidly advancing AI technologies.
Balancing Empathy and Safety: New AI Framework Personalizes Mental Health Support
Researchers have developed a multi-objective alignment framework for AI therapy systems that better balances patient preferences with clinical safety. The approach uses direct preference optimization across six therapeutic dimensions, achieving superior results compared to single-objective methods.
Open-Source 'Claude Cowork' Alternative Emerges with Local Voice & Agent Features
Developers have launched a free, open-source alternative to Anthropic's Claude Cowork. It runs 100% locally, supports voice, background agents, and connects to any LLM.
Jack Dorsey's Block Launches Free, Open-Source AI Coding Agent Goose
Jack Dorsey's Block has released Goose, a free and open-source AI agent for code execution and testing. It works with any LLM and supports MCP servers, offering a CLI and desktop app.
US Data Center Power Demand Hits 15 GW, Grid Constraints Emerge
US data center power demand reached 15 gigawatts in 2023, up from 11 GW in 2022. This rapid growth highlights a widening bottleneck: compute infrastructure is scaling faster than power delivery systems can support.
GPT4All Hits 77K GitHub Stars, Adds DeepSeek R1 for Free Local AI
The GPT4All project has surpassed 77,000 GitHub stars as it adds support for distilled DeepSeek R1 models, enabling reasoning-capable AI to run locally on consumer CPUs with zero API costs.
Boll & Branch Deploys OpenClaw AI Agent 'Tess' Across Operations, From Scheduling to Customer Insights
Bedding brand Boll & Branch created an AI agent named 'Tess' using open-source platform OpenClaw. Initially a scheduling assistant, Tess now integrates with Slack, Shopify, and marketing tools to generate customer reports and analyze social trends, supporting the brand's physical retail expansion.
Sipeed Launches PicoClaw, a Sub-$10 LLM Orchestration Framework for Edge
Sipeed unveiled PicoClaw, an open-source LLM orchestration framework designed to run on ~$10 hardware with less than 10MB RAM. It supports multi-channel messaging, tools, and the Model Context Protocol (MCP).
mlx-vlm v0.4.4 Launches with Falcon-Perception 300M, TurboQuant Metal Kernels & 1.9x Decode Speedup
The mlx-vlm library v0.4.4 adds support for TII's Falcon-Perception 300M vision model and introduces TurboQuant Metal kernels, achieving up to 1.9x faster decoding with 89% KV cache savings on Apple Silicon.
Anthropic's Claude Skills Implements 3-Layer Context Architecture to Manage Hundreds of Skills
Anthropic's Claude Skills framework employs a three-layer context management system that loads only skill metadata by default, enabling support for hundreds of specialized skills without exceeding context window limits.
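Anthropic's loader internals aren't published in this summary, but the metadata-first idea can be sketched in a few lines: only each skill's name and one-line description sit in the prompt by default, and a full body is pulled into context on demand when that skill is actually triggered. The skill names and schema below are hypothetical, and the sketch collapses the three layers into a two-level index/body split for brevity.

```python
# Hypothetical skill registry: lightweight metadata plus a heavyweight body.
SKILLS = {
    "pdf-report": {
        "description": "Render analysis results as a formatted PDF report.",
        "body": "FULL multi-page instructions, templates, and examples.",
    },
    "sql-review": {
        "description": "Audit SQL queries for correctness and performance.",
        "body": "FULL multi-page review checklist and examples.",
    },
}

def metadata_layer() -> str:
    """Default layer: a cheap index of every skill, always in context."""
    return "\n".join(f"- {name}: {s['description']}"
                     for name, s in SKILLS.items())

def load_skill(name: str) -> str:
    """Deeper layers: the full body, loaded only when the skill is invoked."""
    return SKILLS[name]["body"]

print(metadata_layer())
```

The context cost of the default layer grows with the number of skills times one line each, rather than with the total size of all skill bodies, which is what makes hundreds of skills feasible.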
Atomic Bot Launches Native App to Simplify OpenClaw (Clawdbot) Setup on macOS and Windows
Atomic Bot has released a native, open-source desktop application that simplifies the notoriously complex setup process for the OpenClaw AI agent. The app allows users to install and configure OpenClaw with one click on macOS and Windows, with Linux support planned.
AI-Powered 'Vibe-Coded' Companies Emerge as AI Collapses Traditional Staffing Models
Entrepreneur Matthew Gallagher used AI to automate core business functions—coding, marketing, support—allowing his company to scale without building a large managerial team. This demonstrates AI's current strength: drastically reducing coordination costs to enable solo or small teams to execute like corporations.
MemFactory Framework Unifies Agent Memory Training & Inference, Reports 14.8% Gains Over Baselines
Researchers introduced MemFactory, a unified framework treating agent memory as a trainable component. It supports multiple memory paradigms and shows up to 14.8% relative improvement over baseline methods.
Anthropic Signs AI Safety MOU with Australian Government, Aligning with National AI Plan
Anthropic has signed a Memorandum of Understanding with the Australian Government to collaborate on AI safety research. The partnership aims to support the implementation of Australia's National AI Plan.
Clawdbot AI Agent Autonomously Transcribes & Replies to Voice Messages Using Whisper API
A user demonstrated Clawdbot, an AI agent, autonomously handling a voice message: detecting its Opus format, converting it via FFmpeg, calling OpenAI's Whisper API for transcription, and generating a text reply. This showcases emerging agentic workflow automation without explicit voice feature support.
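Clawdbot's actual code isn't shown, but the pipeline as described has a recognizable shape: sniff the container (Opus voice notes normally arrive in an Ogg container, which starts with the magic bytes `OggS`), shell out to FFmpeg for conversion, then transcribe. A stdlib sketch of the first two steps, with the Whisper API call deliberately left out so the example runs offline; the 16 kHz mono WAV target is a common choice for speech models, not a documented Clawdbot setting:

```python
import subprocess  # noqa: F401  (would run the command in a live agent)
from pathlib import Path

def is_ogg_opus(path: Path) -> bool:
    # Ogg containers (the usual wrapper for Opus voice notes) start with "OggS".
    return path.read_bytes()[:4] == b"OggS"

def ffmpeg_to_wav_cmd(src: Path, dst: Path) -> list[str]:
    # Whisper accepts many formats, but 16 kHz mono WAV is a safe target.
    return ["ffmpeg", "-y", "-i", str(src),
            "-ar", "16000", "-ac", "1", str(dst)]

def handle_voice_message(src: Path) -> list[str]:
    """Return the conversion command the agent would run; transcription via
    the Whisper API would follow, omitted here to keep the sketch offline."""
    if not is_ogg_opus(src):
        raise ValueError("not an Ogg/Opus voice note")
    return ffmpeg_to_wav_cmd(src, src.with_suffix(".wav"))
```

In a live agent the returned command would be executed with `subprocess.run(cmd, check=True)` before uploading the WAV for transcription.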
Claude Skills: How Anthropic's Context-Aware Workflow System Solves the Bloated CLAUDE.md Problem
Claude Skills are modular, self-contained workflow packages that load only when triggered by user intent, solving the context bloat caused by monolithic CLAUDE.md files. They support automatic invocation, slash commands, and can bundle supporting documents.
mlx-vlm v0.4.2 Adds SAM3, DOTS-MOCR Models and Critical Fixes for Vision-Language Inference on Apple Silicon
mlx-vlm v0.4.2 released with support for Meta's SAM3 segmentation model and DOTS-MOCR document OCR, plus fixes for Qwen3.5, LFM2-VL, and Magistral models. Enables efficient vision-language inference on Apple Silicon via MLX framework.
Atomic Chat Integrates Google TurboQuant for Local Qwen3.5-9B, Claims 3x Speed Boost on M4 MacBook Air
Atomic Chat now runs Qwen3.5-9B with Google's TurboQuant locally, claiming a 3x processing speed increase and support for 100k+ context windows on consumer hardware like the M4 MacBook Air.