gentic.news — AI News Intelligence Platform

mac

30 articles about mac in AI news

Thinking Machines Unveils Native Multimodal Interaction Model

Thinking Machines unveiled a native interaction model that simultaneously listens, sees, speaks, interrupts, reacts, thinks in the background, and uses tools. The approach targets the fundamental turn-based bottleneck of current AI assistants.

85% relevant

Cua Driver Open-Sourced: macOS Agent Control for Any App

Cua released Cua Driver as open-source, allowing agents like Claude Code and Codex to drive any macOS app through visual understanding and direct UI interaction.

85% relevant

Google Collaborates with Macy's to Develop 'Ask Macy's' AI Agent

According to Digital Commerce 360, Google is helping Macy's develop an AI agent called 'Ask Macy's'. This signals a deepening partnership between the retail giant and Google Cloud, aiming to deploy generative AI for customer service and product discovery. While full details are limited, the move represents a direct, large-scale application of conversational AI in luxury and general retail.

82% relevant

AutoZone, Home Depot, Macy’s, and Ulta Partner with Google for Agentic AI

AutoZone, Home Depot, Macy’s, and Ulta Beauty have entered into partnerships with Google Cloud to implement agentic AI solutions. These systems, built on Google's Gemini models, aim to handle complex, multi-step customer interactions. The move signals a shift from experimental chatbots to more autonomous, task-completing AI agents in retail.

100% relevant

Gur Singh Claims 7 M4 MacBooks Match A100, Calls Cloud GPU Training a 'Scam'

Developer Gur Singh posted that seven M4 MacBooks (2.9 TFLOPS each) match an NVIDIA A100's performance, calling cloud GPU training a 'scam' and advocating for distributed, consumer-hardware approaches.

77% relevant

AirTrain Enables Distributed ML Training on MacBooks Over Wi-Fi

Developer @AlexanderCodes_ open-sourced AirTrain, a tool that enables distributed ML training across Apple Silicon MacBooks using Wi-Fi by syncing gradients every 500 steps instead of every step. This makes personal device training feasible for models up to 70B parameters without cloud GPU costs.

95% relevant
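The sync-every-500-steps idea can be sketched as periodic parameter averaging: workers train independently and only exchange state at a fixed interval, trading some consistency for far fewer network round-trips. This is an illustrative toy (plain SGD, in-process "workers"), not AirTrain's actual code.

```python
# Toy sketch of low-frequency synchronization for distributed training:
# each worker does local SGD steps and parameters are averaged across
# workers only every SYNC_INTERVAL steps, instead of after every step.

SYNC_INTERVAL = 500  # steps between syncs, as described in the article

def local_step(params, grad, lr=0.1):
    """One local SGD step on a single worker."""
    return [p - lr * g for p, g in zip(params, grad)]

def average_params(worker_params):
    """Average parameters across workers, standing in for a Wi-Fi sync."""
    n = len(worker_params)
    return [sum(ps) / n for ps in zip(*worker_params)]

def train(workers, grads_fn, steps):
    """Run `steps` of training; return final params and number of syncs."""
    syncs = 0
    for step in range(1, steps + 1):
        workers = [local_step(p, grads_fn(i, p)) for i, p in enumerate(workers)]
        if step % SYNC_INTERVAL == 0:
            avg = average_params(workers)
            workers = [list(avg) for _ in workers]
            syncs += 1
    return workers, syncs
```

With 1,000 steps this performs only 2 syncs instead of 1,000, which is the bandwidth saving that makes Wi-Fi links viable.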

Claude Code Runs 100% Locally on Mac via Native 200-Line API Server

A developer created a 200-line server that speaks Anthropic's API natively, allowing Claude Code to run entirely locally on M-series Macs at 65 tokens/second with no cloud dependency.

100% relevant
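The trick the article describes is a local HTTP endpoint that mimics Anthropic's Messages API shape, so the Claude Code client talks to it without knowing the backend is local. The sketch below is an assumption-laden stand-in for the developer's 200-line server: the response fields follow Anthropic's published Messages schema, but `run_local_model` is a placeholder where a real server would call an on-device model.

```python
# Minimal sketch of an Anthropic-Messages-compatible local endpoint.
# run_local_model is a placeholder for an on-device inference backend
# (e.g. an MLX model on Apple Silicon); here it just echoes the input.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_local_model(messages):
    """Placeholder for local inference; echoes the last user message."""
    last = messages[-1]["content"] if messages else ""
    return f"(local echo) {last}"

class MessagesHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/messages":
            self.send_error(404)
            return
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        text = run_local_model(body.get("messages", []))
        reply = {                      # Anthropic Messages response shape
            "id": "msg_local_0001",
            "type": "message",
            "role": "assistant",
            "model": body.get("model", "local"),
            "content": [{"type": "text", "text": text}],
            "stop_reason": "end_turn",
            "usage": {"input_tokens": 0, "output_tokens": 0},
        }
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

def serve(port=8080):
    """Run the local endpoint; point the client's base URL at it."""
    HTTPServer(("127.0.0.1", port), MessagesHandler).serve_forever()
```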

AI Developer Tools Shift to Mac-First, Excluding Windows/Linux Users

AI developers report a growing trend of cutting-edge AI tools being released exclusively or primarily for macOS, making it difficult for Windows and Linux users to access the latest innovations. This platform shift creates a hardware-based barrier to entry in the AI development ecosystem.

75% relevant

OpenAI Codex Update Adds macOS Agent, Browser, Memory; 3M Weekly Users

OpenAI released a major Codex update featuring background macOS automation, an in-app browser, persistent memory, and 90+ plugins. With 3M weekly users and nearly half of usage now non-coding, Codex is being repositioned as a general work agent.

100% relevant

Perplexity AI Launches 'Personal Computer' for Mac App Orchestration

Perplexity AI has released 'Personal Computer', a feature that integrates with its Mac app to securely orchestrate local files and applications. This move expands its AI assistant from web search to direct desktop interaction.

87% relevant

Mac Studio Runs 122B-Parameter AI Model Locally, Beats AWS on Cost

A developer demonstrated that a $3,999 Mac Studio can run a 122B-parameter AI model locally. Compared to a $5/hour AWS instance, the Mac pays for itself in roughly five weeks of continuous use.

85% relevant
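The break-even claim checks out with simple arithmetic, assuming the stated $3,999 purchase price against continuous use of a $5/hour instance:

```python
# Back-of-the-envelope check of the article's "five weeks" break-even claim.
MAC_STUDIO_PRICE = 3_999   # USD, one-time purchase
CLOUD_RATE = 5             # USD per hour, continuous use

break_even_hours = MAC_STUDIO_PRICE / CLOUD_RATE   # ~800 hours
break_even_weeks = break_even_hours / (24 * 7)     # ~4.8 weeks of 24/7 use
```

That is roughly 800 hours, or about 4.8 weeks of round-the-clock use, consistent with the article's figure.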

Mac Studio AI Hardware Shortage Signals Shift to Cloud Rentals

Developers report a global shortage of high-memory Apple Silicon Macs, with 128GB Mac Studios unavailable worldwide. This pushes practitioners toward renting cloud H100 GPUs at ~$3/hr, marking a shift from the recent local AI trend.

85% relevant

OpenAI Rebrands Mac Codex App as Unified AI 'Superapp' Platform

OpenAI is transforming its Mac Codex app into a unified AI platform dubbed a 'Superapp,' integrating chat, agent workflows, and multimodal capabilities into a single interface. This move signals a shift from a specialized coding tool to a broader, user-facing desktop AI application.

85% relevant

Atomic Chat's TurboQuant Enables Gemma 4 Local Inference on 16GB MacBook Air

Atomic Chat's new TurboQuant algorithm aggressively compresses the KV cache, allowing models requiring 32GB+ RAM to run on 16GB MacBook Airs at 25 tokens/sec, advancing local AI deployment.

85% relevant
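TurboQuant's internals are not described in the article, but KV-cache compression of this kind generally belongs to the low-bit quantization family: each cached key/value vector is stored as small integer codes plus a scale factor, cutting memory roughly 4x versus float32. A generic sketch of that idea (not TurboQuant itself):

```python
# Generic per-vector symmetric int8 quantization, the broad technique
# behind many KV-cache compression schemes. Not TurboQuant's algorithm.

def quantize(vec):
    """Store a float vector as int8 codes plus one float scale: v ~ code * scale."""
    scale = max(abs(x) for x in vec) / 127 or 1.0  # avoid zero scale
    codes = [round(x / scale) for x in vec]        # each fits in int8
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct approximate floats at attention time."""
    return [c * scale for c in codes]
```

The reconstruction error per element is bounded by about half the scale, which is why aggressive cache compression can preserve output quality while fitting larger models into 16GB of RAM.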

Hazmat Makes `--dangerously-skip-permissions` Actually Safe for Claude Code on macOS

A new tool, Hazmat, enables safe, fully autonomous Claude Code sessions on macOS by applying multiple OS-level security layers, making `--dangerously-skip-permissions` a viable productivity option.

92% relevant

Apple's AI Mac Mini Sells Out, Signaling Unprecedented Demand

Apple's latest Mac mini, featuring its new Apple Intelligence silicon, has sold out across retailers—a first for the typically high-availability product line. This signals overwhelming initial demand for Apple's push into on-device AI computing.

85% relevant

Open-Source AI Assistant Runs Locally on MacBook Air M4 with 16GB RAM, No API Keys Required

A developer showcased a complete AI assistant running entirely on a MacBook Air M4 with 16GB RAM, using open-source models with no cloud API calls. This demonstrates the feasibility of capable local AI on consumer-grade Apple Silicon hardware.

93% relevant

Gemma 4 26B A4B Hits 45.7 tokens/sec Decode Speed on MacBook Air via MLX Community

A community benchmark shows the Gemma 4 26B A4B model running at 45.7 tokens/sec decode speed on a MacBook Air using the MLX framework. This highlights rapid progress in efficient local deployment of mid-size language models on consumer Apple Silicon.

93% relevant

PicoClaw: $10 RISC-V AI Agent Challenges OpenClaw's $599 Mac Mini Requirement

Developers have launched PicoClaw, a $10 RISC-V alternative to OpenClaw that runs in just 10MB of RAM, versus OpenClaw's requirement of a $599 Mac Mini. The Go-based binary offers the same AI agent capabilities at 1/60th the hardware cost.

87% relevant

Atomic Bot Launches Native App to Simplify OpenClaw (Clawdbot) Setup on macOS and Windows

Atomic Bot has released a native, open-source desktop application that simplifies the notoriously complex setup process for the OpenClaw AI agent. The app allows users to install and configure OpenClaw with one click on macOS and Windows, with Linux support planned.

85% relevant

Macy's Launches 'Ask Macy's' AI Conversational Shopping Assistant

Macy's has publicly launched 'Ask Macy's,' an AI-powered conversational shopping assistant designed to help users discover brands, trends, and receive personalized product recommendations. This follows an initial dark launch phase and represents a major department store's move into agentic AI for commerce.

95% relevant

Ollama Now Supports Apple MLX Backend for Local LLM Inference on macOS

Ollama, the popular framework for running large language models locally, has added support for Apple's MLX framework as a backend. This enables more efficient execution of models like Llama 3.2 and Mistral on Apple Silicon Macs.

85% relevant

Add Machine-Enforced Rules to Claude Code with terraphim-agent Verification Sweeps

Add verification patterns to your CLAUDE.md rules so they're machine-checked, not just suggestions. terraphim-agent now supports grep-based verification sweeps.

83% relevant
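A grep-based verification sweep of the kind described is straightforward to picture: rules are regexes that the codebase must not match, and the sweep reports violations with file and line. The rule names and patterns below are invented for illustration; this is not terraphim-agent's actual format.

```python
# Hypothetical sketch of a grep-based verification sweep: each rule is a
# regex that must NOT appear anywhere in the scanned files, turning a
# prose rule (e.g. "never log with print") into a machine-checked one.
import re

RULES = {
    "no-print-logging": r"\bprint\(",   # illustrative rule
    "no-todo-left": r"\bTODO\b",        # illustrative rule
}

def sweep(files, rules=RULES):
    """files: {filename: text}. Return (rule, filename, line_no) violations."""
    violations = []
    for name, pattern in rules.items():
        rx = re.compile(pattern)
        for fname, text in files.items():
            for no, line in enumerate(text.splitlines(), 1):
                if rx.search(line):
                    violations.append((name, fname, no))
    return violations
```

Running the sweep in CI (or after each agent edit) is what upgrades the rules from suggestions to enforced constraints.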

Developer Claims AI Search Equivalent to Perplexity Can Be Built Locally on a $2,500 Mac Mini

A developer asserts that the core functionality of Perplexity's $20-200/month AI search service can be replicated using open-source LLMs, crawlers, and RAG frameworks on a single Mac Mini for a one-time $2,500 hardware cost.

85% relevant

Atomic Chat Integrates Google TurboQuant for Local Qwen3.5-9B, Claims 3x Speed Boost on M4 MacBook Air

Atomic Chat now runs Qwen3.5-9B with Google's TurboQuant locally, claiming a 3x processing speed increase and support for 100k+ context windows on consumer hardware like the M4 MacBook Air.

85% relevant

Qwen3-TTS Added to mlx-tune, Enabling Full Qwen Model Fine-Tuning on Apple Silicon Macs

The mlx-tune library now supports Qwen3-TTS, making the entire Qwen model stack—including the new text-to-speech model—fine-tunable on Apple Silicon Macs. This expands local AI development options for researchers and developers.

85% relevant

Kimi 2.5's 1T Parameter MoE Model Runs on 96GB Mac Hardware via SSD Streaming

Developers have demonstrated that Kimi 2.5's 1 trillion parameter Mixture-of-Experts model can run on Mac hardware with just 96GB RAM by streaming expert weights from SSD, with only 32B parameters active per token.

85% relevant
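The streaming idea is that a MoE router activates only a few experts per token, so only those experts' weights need to be resident in RAM, with an LRU cache bounding memory and evicted experts re-read from SSD on demand. The toy model below stands in for that mechanism (a dict plays the role of SSD files); it is not Kimi's implementation.

```python
# Toy model of SSD-streamed MoE inference: only routed experts are loaded,
# an LRU cache bounds resident memory, and `loads` counts slow SSD reads.
from collections import OrderedDict

class ExpertCache:
    def __init__(self, load_fn, capacity):
        self.load_fn = load_fn    # reads one expert's weights from "SSD"
        self.capacity = capacity  # max experts resident in RAM
        self.cache = OrderedDict()
        self.loads = 0            # number of SSD reads (the slow path)

    def get(self, expert_id):
        if expert_id in self.cache:
            self.cache.move_to_end(expert_id)  # LRU refresh on hit
            return self.cache[expert_id]
        weights = self.load_fn(expert_id)      # slow path: read from disk
        self.loads += 1
        self.cache[expert_id] = weights
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least-recently-used
        return weights

def forward_token(routed_experts, cache):
    """Only the experts the router chose are touched for this token."""
    return [cache.get(e) for e in routed_experts]
```

As long as the router's choices are reasonably stable across nearby tokens, most lookups hit the cache, which is why only the active-parameter footprint (32B here) needs to fit in RAM.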

Mark Cuban Deploys Mac Mini AI Agent to Automate Unsubscribing from AI-Generated Cold Emails

Investor Mark Cuban is training an AI agent on a Mac Mini to automatically unsubscribe from AI-generated cold emails in his Gmail. He frames it as a defensive countermeasure: 'You hit me with AI, I'll hit you with AI back right away.'

85% relevant

Stripe Proposes Machine Payments Protocol: HTTP 402 & Scoped Tokens for AI Agent Payments

Stripe's open Machine Payments Protocol (MPP) enables AI agents to autonomously discover, negotiate, and complete payments using HTTP 402 status codes and scoped payment tokens. It supports both fiat and crypto rails, eliminating the need for human-in-the-loop payment flows.

95% relevant
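The flow the article describes is a 402 handshake: the agent requests a resource, the server answers HTTP 402 with a price quote, the agent obtains a scoped payment token and retries. The sketch below illustrates that loop in-process; the field names and token format are placeholders, not MPP's actual wire schema.

```python
# Toy sketch of an HTTP 402 payment handshake between an agent and a
# server. Real MPP defines its own headers and token schemas; "price"
# and the tok_* format here are illustrative placeholders.

PRICE_USD = 0.05
VALID_TOKENS = set()

def issue_scoped_token(resource, max_amount):
    """Stand-in for a payments provider minting a token scoped to one
    resource and a spending cap."""
    token = f"tok_{resource}_{max_amount}"
    VALID_TOKENS.add(token)
    return token

def server(resource, payment_token=None):
    """Return (status, body): 402 with a quote until a valid token arrives."""
    if payment_token in VALID_TOKENS:
        return 200, {"resource": resource, "data": "paid content"}
    return 402, {"error": "payment required", "price": PRICE_USD}

def agent_fetch(resource):
    """Agent-side flow: try, read the 402 quote, pay, retry with the token."""
    status, body = server(resource)
    if status == 402:
        token = issue_scoped_token(resource, body["price"])
        status, body = server(resource, payment_token=token)
    return status, body
```

Scoping the token to one resource and a maximum amount is what lets the human stay out of the loop without handing the agent an open-ended payment credential.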

Qwen 3.5 397B-A17B MoE Model Runs on M3 Mac at 5.7 TPS with 5.5GB Active Memory via SSD Streaming

Developer Dan reportedly runs the 209GB Qwen 3.5 397B-A17B MoE model on an M3 Mac at ~5.7 tokens per second using only 5.5GB of active memory by quantizing and streaming weights from SSD.

85% relevant