gentic.news — AI News Intelligence Platform

qwen

30 articles about qwen in AI news

Qwen3.5-27B Gets Sparse Autoencoders: 81k Features Exposed

Qwen released Qwen-Scope, adding Sparse Autoencoders to Qwen3.5-27B, exposing 81k features across 64 layers for steerable inference.

85% relevant
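Qwen-Scope's internals aren't described beyond the feature counts, but the general sparse-autoencoder recipe — projecting a residual-stream activation into an overcomplete, mostly-zero feature basis and back — can be sketched as follows (all sizes and weights here are toy stand-ins, not Qwen-Scope's):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_features = 32, 256  # toy sizes; Qwen-Scope reportedly exposes 81k features

W_enc = rng.standard_normal((d_model, n_features)) * 0.1
b_enc = -0.5 * np.ones(n_features)  # negative bias pushes most features to zero
W_dec = rng.standard_normal((n_features, d_model)) * 0.1

def sae_features(activation: np.ndarray) -> np.ndarray:
    """Encode a residual-stream activation into sparse, non-negative features."""
    return np.maximum(activation @ W_enc + b_enc, 0.0)  # ReLU keeps features >= 0

def sae_reconstruct(features: np.ndarray) -> np.ndarray:
    """Map sparse features back to the model's activation space."""
    return features @ W_dec

x = rng.standard_normal(d_model)
f = sae_features(x)
print(f"{np.count_nonzero(f)} of {n_features} features active")  # most stay zero
```

Steering then amounts to nudging individual entries of `f` before reconstructing, which is what makes an exposed feature basis useful for steerable inference.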

Alibaba Opens Qwen AI App to External Partners via China Eastern Deal

Alibaba has opened its Qwen consumer AI app to its first external partner, China Eastern Airlines. Users can now manage the entire flight booking process through a single chat interface, expanding the app's real-world agentic capabilities beyond Alibaba's ecosystem.

74% relevant

Qwen3.6-27B: How to Run a 17GB Local Model That Beats 397B MoE on Coding Tasks

Qwen3.6-27B delivers flagship-level coding performance in a 55.6GB model that can be quantized to 16.8GB, making high-quality local coding assistance accessible.

100% relevant

DNL Method Finds 2 Bits That Crash ResNet-50, Qwen3-30B

Researchers introduced Deep Neural Lesion (DNL), a method for locating a network's most critical parameters. Flipping just two sign bits cut ResNet-50's accuracy by 99.8% and dropped Qwen3-30B's reasoning performance to 0%.

95% relevant
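The paper's search procedure isn't given in the summary, but the core operation — flipping the sign bit of a float32 weight in its IEEE-754 bit pattern — is simple to illustrate:

```python
import struct

def flip_sign_bit(x: float) -> float:
    """Flip the sign bit of a float32 value via its IEEE-754 bit pattern."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))  # float32 -> uint32
    bits ^= 0x80000000  # the sign bit is the most significant bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

# A single-bit hardware fault or attack negates the weight outright:
print(flip_sign_bit(0.5))  # -0.5
```

The point of DNL is that for a handful of critical parameters, this one-bit negation is enough to destroy the whole network's behavior.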

Alibaba Makes Qwen 3.6 Plus API-Only, Shifts Frontier Model to Paid Access

Alibaba has moved its most capable Qwen 3.6 Plus model to API-only access, while keeping the smaller Qwen 3.6 free. This aligns the company's strategy with OpenAI, Anthropic, and Google's paid frontier model approach.

89% relevant

Qwen 3.6 Released: Free, Open-Weights Model for Local AI Coding

Alibaba's Qwen team released Qwen 3.6, an open-weights AI model for local deployment. This provides a free, private alternative to ID-verified models like Anthropic's Mythos and OpenAI's Codex.

100% relevant

Qwen2.5-7B-Instruct 4-bit DWQ Model Released for Apple MLX

A developer has ported a 4-bit quantized Qwen2.5-7B-Instruct model to Apple's MLX framework. This makes the capable 7B model more efficient to run on Apple Silicon Macs.

77% relevant
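DWQ's distillation step isn't covered here, but the basic storage math of 4-bit weight quantization — the reason a 7B model shrinks to a fraction of its float16 size — can be sketched with plain symmetric quantization (a simplification of what MLX actually does):

```python
import numpy as np

def quantize_4bit(w: np.ndarray):
    """Symmetric 4-bit quantization: map float weights to integers in [-8, 7]."""
    scale = np.max(np.abs(w)) / 7.0  # signed 4-bit range is -8..7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the 4-bit integers."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale)
# 4 bits per weight is 1/8 of float32 (1/4 of float16); rounding error
# is bounded by half the quantization step:
print(float(np.abs(w - w_hat).max()))
```

Real schemes quantize per-group with per-group scales (and DWQ adds a distillation pass), which is why quality holds up better than this global-scale sketch suggests.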

Alibaba Qwen3.6-35B-A3B: 3B-Active Sparse MoE Hits 73.4% on SWE-Bench

Alibaba released Qwen3.6-35B-A3B, a sparse mixture-of-experts model with 35B total but only 3B active parameters. It shows significant gains over its predecessor, scoring 73.4% on SWE-bench Verified and beating Claude 3.5 Sonnet on several vision tasks.

97% relevant

Alibaba's Qwen Hits 1B Downloads, Captures 50% of Open-Source Market

A new report finds Alibaba Cloud's Qwen family of models captured over 50% of global open-source downloads as of March 2026, reaching nearly 1 billion cumulative downloads and solidifying Chinese dominance in open-source AI.

100% relevant

OpenBMB Launches VoxCPM 2, an Open-Source TTS Model Rivaling Qwen3-TTS

OpenBMB has launched VoxCPM 2, an open-source text-to-speech AI model from China. The release is positioned as a direct competitor to Alibaba's Qwen3-TTS, expanding the open-source TTS landscape.

91% relevant

AI Weekly: GPT-6 Rumors, DeepSeek V4 on Huawei, Anthropic Models, Qwen 3.6-Plus

A weekly roundup video aggregates major AI rumors and announcements, including unverified GPT-6 details, DeepSeek V4 reportedly running on Huawei hardware, and launches of Anthropic's Conway and Ultraplan and Alibaba's Qwen 3.6-Plus.

85% relevant

Alibaba's Qwen3.6-Plus Reportedly Under Half the Size of Kimi K2.5, Nears Claude Opus 4.5 Performance

Alibaba's Tongyi Lab announced Qwen3.6-Plus, a model reportedly under half the size of Moonshot's Kimi K2.5 while approaching Claude Opus 4.5 performance, signaling major efficiency gains in China's LLM race.

95% relevant

Qwen 3.6 Plus Demonstrates Full Web OS and Browser Automation in Single Session

A developer tested Qwen 3.6 Plus on a complex web OS workflow involving Python terminal operations, gaming, and browser automation, with the model handling all tasks seamlessly in a single session.

89% relevant

Alibaba Launches Qwen3.6-Plus with 1M-Token Context, Targeting AI Agent and Coding Workloads

Alibaba Cloud has launched Qwen3.6-Plus, a new multimodal large language model featuring a 1 million-token context length. The release is a strategic move to capture developer mindshare in the competitive AI agent and coding assistant market.

95% relevant

Gamma 31B Model Reportedly Outperforms Qwen 3.5 397B, Highlighting Efficiency Leap

A developer's social media post claims the Gamma 31B model outperforms the much larger Qwen 3.5 397B. If verified, this would represent a dramatic efficiency gain in large language model scaling.

85% relevant

Qwen3.5-Omni Demonstrates 'Audio-Visual Vibe Coding' as an Emergent Ability

Alibaba's Qwen3.5-Omni model appears to have developed an emergent ability to generate code from combined audio and visual inputs without specific training. This suggests a significant leap in multimodal reasoning for a model already positioned as a strong GPT-4 competitor.

85% relevant

Alibaba's Qwen 3.5 Omni Targets Western Market with Advanced Voice AI and Strategic Messaging

Alibaba's Qwen 3.5 Omni model features a robust voice AI that handles interruptions naturally, while its launch presentation signals a direct push to compete in Western markets as a cost-effective alternative.

85% relevant

Qwen 3.6 Plus Preview Launches on OpenRouter with Free 1M Token Context, Disrupting API Pricing

Alibaba's Qwen team has released a preview of Qwen 3.6 Plus on OpenRouter with a 1 million token context window, charging $0 for both input and output tokens. This directly undercuts paid long-context offerings from Anthropic and OpenAI.

97% relevant

Alibaba's Qwen Team Teases Qwen 3.6 Model, Signaling Major Open-Source LLM Update

Alibaba's Qwen team has teased the imminent release of Qwen 3.6, the next major version of its open-source large language model series. This follows the release of Qwen 2.5 in late 2024 and signals continued aggressive competition in the open-weight model space.

85% relevant

Alibaba's Qwen3.5-Omni Launches with Script-Level Captioning, Audio-Visual Vibe Coding, and Real-Time Web Search

Alibaba's Qwen team has released Qwen3.5-Omni, a multimodal model focused on interpreting images, audio, and video with new capabilities like script-level captioning and 'vibe coding'. It's open-access on Hugging Face but does not generate media.

85% relevant

Atomic Chat Integrates Google TurboQuant for Local Qwen3.5-9B, Claims 3x Speed Boost on M4 MacBook Air

Atomic Chat now runs Qwen3.5-9B locally with Google's TurboQuant, claiming a 3x processing speed increase and support for context windows of 100k+ tokens on consumer hardware like the M4 MacBook Air.

85% relevant

Qwen3-TTS Added to mlx-tune, Enabling Full Qwen Model Fine-Tuning on Apple Silicon Macs

The mlx-tune library now supports Qwen3-TTS, making the entire Qwen model stack—including the new text-to-speech model—fine-tunable on Apple Silicon Macs. This expands local AI development options for researchers and developers.

85% relevant

Alibaba's Qwen Team Announces More Open-Source Models Coming at ModelScope DevCon

Alibaba's Qwen team announced at the ModelScope DevCon in Nanjing that they will release more open-source Qwen models. This signals continued investment in their competitive open-weight LLM series.

85% relevant

Niu Technologies Demos AI-Powered Scooter Using Alibaba's Qwen 3.5 for Self-Balancing and Navigation

Chinese electric scooter maker Niu Technologies demonstrated a prototype that self-balances, moves, turns, and navigates autonomously using Alibaba's Qwen 3.5 model. The system is described as an L2-level intelligent driving assistance system, applying autonomous vehicle tech to micromobility.

85% relevant

Qwen 3.5 397B-A17B MoE Model Runs on M3 Mac at 5.7 TPS with 5.5GB Active Memory via SSD Streaming

Developer Dan reportedly runs the 209GB Qwen 3.5 397B-A17B MoE model on an M3 Mac at ~5.7 tokens per second using only 5.5GB of active memory by quantizing and streaming weights from SSD.

85% relevant
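The report doesn't detail the streaming setup; one standard way to keep only a MoE's active experts resident in RAM is OS-level memory mapping, where weight pages fault in from SSD on first access. A minimal numpy sketch (file layout and sizes are illustrative, not the actual Qwen checkpoint format):

```python
import os
import tempfile

import numpy as np

# Write a toy expert-weight file to disk, then memory-map it so only the
# experts a token's router selects are ever paged into RAM.
path = os.path.join(tempfile.mkdtemp(), "experts.npy")
n_experts, d = 64, 4096
np.save(path, np.zeros((n_experts, d), dtype=np.float16))

experts = np.load(path, mmap_mode="r")  # no full read; pages fault in on access
active = [3, 17]                        # e.g. the router picks 2 of 64 experts
slice_in_ram = np.asarray(experts[active])  # only these rows are actually read
print(slice_in_ram.shape)  # (2, 4096)
```

With only ~17B of 397B parameters active per token, this kind of on-demand paging is what lets active memory stay far below the checkpoint's 209GB on-disk size.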

Semantic Invariance Study Finds Qwen3-30B-A3B Most Robust LLM Agent, Outperforming Larger Models

A new metamorphic testing framework reveals LLM reasoning agents are fragile to semantically equivalent input variations. The 30B parameter Qwen3 model achieved 79.6% invariant responses, outperforming models up to 405B parameters.

85% relevant
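The framework itself isn't described in detail, but the metamorphic-testing idea — apply semantics-preserving rewrites to an input and check that the agent's answer is invariant — is easy to sketch (the agent and rewrites below are toy stand-ins):

```python
def toy_agent(prompt: str) -> str:
    """Stand-in for an LLM agent: answers one fixed arithmetic question."""
    if "2 + 3" in prompt or "3 + 2" in prompt:
        return "5"
    return "unknown"

def metamorphic_variants(prompt: str) -> list[str]:
    """Semantically equivalent rewrites of the same question."""
    return [
        prompt,
        prompt.replace("2 + 3", "3 + 2"),  # commutativity of addition
        "Please answer: " + prompt,        # politeness prefix
    ]

def invariance_rate(agent, prompt: str) -> float:
    """Fraction of variants whose answer matches the original's."""
    answers = [agent(v) for v in metamorphic_variants(prompt)]
    return answers.count(answers[0]) / len(answers)

print(invariance_rate(toy_agent, "What is 2 + 3?"))  # 1.0 for this toy agent
```

The study's 79.6% figure is exactly this kind of rate measured over many prompts and transformations, which is why it exposes fragility that single-prompt benchmarks miss.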

Qwen3.5 Benchmark Analysis Reveals Critical Performance Threshold at 27B Parameters

New benchmark comparisons of Alibaba's Qwen3.5 model family show a dramatic performance leap at the 27B parameter level, with smaller variants scoring markedly lower across the same evaluation metrics.

85% relevant

Alibaba's Qwen3.5: The Efficiency Breakthrough That Could Democratize Multimodal AI

Alibaba has open-sourced Qwen3.5, a multimodal AI model that combines linear attention with sparse Mixture of Experts architecture to deliver high performance without exorbitant computational costs, potentially making advanced AI more accessible.

85% relevant

Alibaba's Qwen3-Coder-Next: The 80B Parameter Coding Agent That Only Uses 3B at Inference

Alibaba has unveiled Qwen3-Coder-Next, an 80B parameter coding agent that activates just 3B parameters during inference. It achieves competitive performance on SWE-Bench and Terminal-Bench while supporting a 256K context window.

85% relevant

Qwen's Tiny Titan: How a 2B Parameter Multimodal Model Challenges AI Scaling Assumptions

Alibaba's Qwen team has released Qwen2-VL-2B, a surprisingly capable 2-billion parameter multimodal model with native 262K context length, extensible to 1M tokens. This compact model challenges assumptions about AI scaling while offering practical long-context capabilities for resource-constrained environments.

95% relevant