time series
30 articles about time series in AI news
Google's TimesFM: 200M-Param Foundation Model for Zero-Shot Time Series
Google released TimesFM, a 200M-parameter foundation model for time series forecasting that works without training on user data. It's now available open-source and as a product inside BigQuery.
Google Open-Sources TimesFM: A 100B-Point Time Series Foundation Model for Zero-Shot Forecasting
Google has open-sourced TimesFM, a foundation model for time series forecasting trained on 100 billion real-world time points. It requires no dataset-specific training and can generate predictions instantly for domains like traffic, weather, and demand.
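"Zero-shot" here means the model consumes raw history and emits a forecast with no per-series fitting step. The sketch below illustrates that call pattern with a seasonal-naive stand-in; a real foundation model like TimesFM would replace the function body with a forward pass over frozen pretrained weights, and the period and horizon values are made up for illustration:

```python
import numpy as np

def zero_shot_forecast(history: np.ndarray, horizon: int) -> np.ndarray:
    """Stand-in for a pretrained forecaster: a seasonal-naive baseline
    that repeats the last observed cycle. The key property shared with
    a zero-shot foundation model is that nothing here fits parameters
    to `history` -- the series goes straight in, a forecast comes out."""
    period = min(len(history), 24)   # assumed daily seasonality (illustrative)
    last_cycle = history[-period:]
    reps = int(np.ceil(horizon / period))
    return np.tile(last_cycle, reps)[:horizon]

# No .fit() call anywhere, mirroring the zero-shot usage pattern.
hourly_demand = np.sin(np.arange(96) * 2 * np.pi / 24) + 10.0
pred = zero_shot_forecast(hourly_demand, horizon=12)
print(pred.shape)  # (12,)
```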
TimeSqueeze: A New Method for Dynamic Patching in Time Series Forecasting
Researchers introduce TimeSqueeze, a dynamic patching mechanism for Transformer-based time series models. It adaptively segments sequences based on signal complexity, achieving up to 20x faster convergence and 8x higher data efficiency. This addresses a core trade-off between accuracy and computational cost in long-horizon forecasting.
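The article does not spell out TimeSqueeze's exact mechanism, but the general idea of complexity-adaptive patching can be sketched: use long patches where the signal is flat and finer patches where it is volatile. The greedy variance test and patch sizes below are invented for illustration and may differ from the paper:

```python
import numpy as np

def dynamic_patches(x: np.ndarray, base: int = 8, thresh: float = 0.5):
    """Greedy complexity-adaptive segmentation: try a patch of `base`
    points; while its variance exceeds `thresh`, halve it, so volatile
    regions get shorter patches and flat regions longer ones."""
    patches, i = [], 0
    while i < len(x):
        size = base
        while size > 1 and np.var(x[i:i + size]) > thresh:
            size //= 2                      # volatile region: shrink patch
        patches.append(x[i:i + size])
        i += size
    return patches

flat = np.zeros(16)
spiky = np.array([0, 5, -5, 5, -5, 5, -5, 5], dtype=float)
parts = dynamic_patches(np.concatenate([flat, spiky]))
print([len(p) for p in parts])  # two long patches, then point-level patches
```

The flat prefix is covered by two 8-point patches, while the oscillating tail collapses to single-point patches, which is the trade-off the method exploits: fewer tokens for easy regions, full resolution for hard ones.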
TimeGS: How Computer Graphics Techniques Are Revolutionizing Time Series Forecasting
Researchers have introduced TimeGS, a novel AI framework that treats time series forecasting as a 2D rendering problem. By adapting Gaussian splatting techniques from computer graphics, the approach achieves state-of-the-art performance while maintaining temporal continuity.
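The 1D analogue of Gaussian splatting can be sketched as representing a signal by a small set of weighted Gaussian primitives whose sum "renders" the curve. The centers, scales, and weights below are hand-placed for illustration; TimeGS would optimize such primitives rather than set them manually:

```python
import numpy as np

def render_gaussians(t, centers, scales, weights):
    """Render a 1D signal as a sum of weighted Gaussians: each
    primitive contributes a smooth bump, and the superposition
    reconstructs the curve with built-in temporal continuity."""
    t = t[:, None]                                   # shape (T, 1)
    bumps = weights * np.exp(-0.5 * ((t - centers) / scales) ** 2)
    return bumps.sum(axis=1)                         # shape (T,)

t = np.linspace(0, 10, 101)
signal = render_gaussians(t, centers=np.array([2.0, 7.0]),
                          scales=np.array([0.5, 1.0]),
                          weights=np.array([1.0, -0.5]))
print(signal.shape)  # (101,)
```

Because each primitive is smooth and differentiable, gradients flow to its parameters, which is what makes the graphics-style "fit by rendering" loop applicable to forecasting.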
StaTS AI Model Revolutionizes Time Series Forecasting with Adaptive Noise Schedules
Researchers introduce StaTS, a diffusion model that learns adaptive noise schedules and uses frequency guidance for superior time series forecasting. The approach addresses key limitations in existing methods while maintaining efficiency.
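For context on what an "adaptive" schedule improves on, the standard fixed cosine noise schedule from diffusion models looks like the following. StaTS reportedly learns its schedule from the data instead of fixing a closed form like this one; the code shows only the conventional baseline:

```python
import numpy as np

def cosine_alphas(T: int, s: float = 0.008) -> np.ndarray:
    """Fixed cosine noise schedule (Nichol & Dhariwal): cumulative
    signal level alpha_bar_t for t = 0..T, decaying smoothly from 1
    (clean data) toward 0 (pure noise)."""
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f / f[0]                     # normalize so alpha_bar_0 = 1

ab = cosine_alphas(1000)
print(round(float(ab[0]), 3), round(float(ab[-1]), 6))
```

A learned schedule replaces this hand-designed decay curve with one tuned per dataset, which matters for time series whose noise characteristics vary across domains.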
Google's TimesFM Foundation Model: A New Paradigm for Time Series Forecasting
Google Research has open-sourced TimesFM, a 200 million parameter foundation model for time series forecasting. Trained on 100 billion real-world time points, it demonstrates remarkable zero-shot forecasting capabilities across diverse domains without task-specific training.
Google's TimesFM: The Zero-Shot Time Series Model That Works Without Training
Google has open-sourced TimesFM, a foundation model for time series forecasting that requires no training on specific datasets. Unlike traditional models, which must be fit to each new dataset, it forecasts directly from historical context, potentially streamlining forecasting across industries.
CausalTimePrior: The Missing Link for AI That Understands Time and Cause
Researchers have introduced CausalTimePrior, a new framework to generate synthetic time series data with known interventions. This breakthrough addresses a critical gap in training AI models to understand causality over time, paving the way for foundation models in time series analysis.
KairosVL: The AI That Understands Time's Hidden Stories
Researchers have developed KairosVL, a novel AI framework that combines time series analysis with semantic reasoning using a two-round reinforcement learning approach. This breakthrough enables AI to understand not just numerical patterns but also the contextual meaning behind temporal data, significantly improving decision-making and generalization capabilities.
STAR-Set Transformer: AI Finally Makes Sense of Messy Medical Data
Researchers have developed a new transformer architecture that handles irregular, asynchronous medical time series by incorporating temporal and variable-type attention biases, outperforming existing methods on ICU prediction tasks while providing interpretable insights.
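The core trick of time- and type-aware attention can be sketched as two additive biases on the attention logits: observations close in time attend more strongly, and observations of the same variable get a bonus. The bias weights below are illustrative, not taken from the paper:

```python
import numpy as np

def biased_attention(q, k, times, var_ids, w_time=0.1, w_var=1.0):
    """Self-attention with additive logit biases for irregular series:
    a penalty growing with temporal distance, plus a bonus when two
    observations belong to the same variable type."""
    logits = q @ k.T / np.sqrt(q.shape[-1])
    logits -= w_time * np.abs(times[:, None] - times[None, :])  # temporal bias
    logits += w_var * (var_ids[:, None] == var_ids[None, :])    # type bias
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)                    # row softmax

rng = np.random.default_rng(0)
q = k = rng.normal(size=(4, 8))
attn = biased_attention(q, k, times=np.array([0.0, 0.1, 5.0, 5.1]),
                        var_ids=np.array([0, 1, 0, 1]))
print(attn.shape)  # (4, 4)
```

Because the biases are functions of timestamps and variable IDs rather than fixed positions, the same mechanism handles irregular sampling and asynchronous channels, which standard positional encodings cannot.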
Time-Series AI Learns to Adapt on the Fly: New Framework Eliminates Fine-Tuning for Unseen Tasks
Researchers have developed ICTP, a framework that equips time-series foundation models with in-context learning capabilities, allowing them to adapt to completely new tasks without fine-tuning. This breakthrough improves performance on unseen tasks by 11.4% and represents a significant step toward more flexible, efficient AI systems for real-world time-series applications.
Sequen Raises $16M to Commercialize 'Large Event Model' Tech for Real-Time Personalization
Sequen, a startup founded by ex-Etsy AI leader Zoë Weil, has secured $16M in Series A funding. Its 'RankTune' platform offers API access to real-time ranking and personalization models, aiming to bring TikTok/Instagram-grade infrastructure to major consumer brands without invasive tracking.
Roboflow's RF-DETR Model Ported to Apple MLX, Enabling Real-Time On-Device Instance Segmentation
Roboflow's RF-DETR object detection model is now available on Apple's MLX framework, enabling real-time instance segmentation on Apple Silicon devices. This port unlocks new on-device visual analysis applications for robotics and augmented vision-language models.
TensorFlow Playground Interactive Demo Updated for 2026, Enabling Real-Time Neural Network Visualization
The TensorFlow Playground, an educational web tool for visualizing neural networks, has been updated. Users can now adjust hyperparameters, watch the model train, and see decision boundaries update in real time.
Elon Musk Predicts 'Vast Majority' of AI Compute Will Be for Real-Time Video
Elon Musk states that real-time video consumption and generation will consume most AI compute, highlighting a shift from text to video as the primary medium for AI processing.
Facebook's SAM 3 Vision Model Ported to Apple's MLX Framework, Enabling Real-Time Tracking on M3 Max
Facebook's Segment Anything Model 3 (SAM 3) has been ported to Apple's MLX framework, enabling real-time object tracking on an M3 Max MacBook Pro. This demonstrates efficient on-device execution of a foundational vision model without cloud dependency.
Google Announces Gemini 3.1 Flash Live: A New Real-Time AI Model
Google has announced Gemini 3.1 Flash Live, a new model variant focused on real-time, low-latency AI interactions. The announcement came via a developer tweet, indicating a potential push for faster, more responsive AI applications.
Awesome Finance Skills: Open-Source Plugin Adds Real-Time Market Analysis to AI Agents
Developer open-sources Awesome Finance Skills, a plug-and-play toolkit that gives AI agents real-time financial data access, sentiment analysis, and automated research report generation. The MIT-licensed package works with Claude Code, OpenClaw, and other popular agent frameworks.
NVIDIA Nemotron Ultra: Details Emerge on Upcoming Open-Source LLM Series
NVIDIA is developing the Nemotron Ultra series of open-source large language models. The project, described as 'insane' and 'underrated,' is generating early hype among AI researchers.
Apple's Neural Engine Jailbroken: Researchers Unlock Full Training Capabilities on M-Series Chips
Security researchers have reverse-engineered Apple's Neural Engine, working around its private, inference-only APIs to enable full neural network training directly on ANE hardware. The breakthrough unlocks 15.8 TFLOPS of compute previously restricted to inference across all M-series devices.
Beyond Blue Books: How Real-Time Market Intelligence AI is Transforming Luxury Asset Valuation
duPont REGISTRY Group's deployment of real-time AI analytics for luxury vehicles demonstrates a scalable model for dynamic pricing, authentication, and market forecasting of high-value collectibles. This approach directly translates to luxury retail for limited editions, vintage items, and exclusive collections.
Alibaba's Qwen 3.5 Series Redefines AI Efficiency: Smaller Models, Smarter Performance
Alibaba's new Qwen 3.5 model series challenges Western AI dominance with four specialized models that deliver superior performance at dramatically lower computational costs. The series targets OpenAI's GPT-5 mini and Anthropic's Claude Sonnet 4.5 while proving smaller architectures can outperform larger predecessors.
Qwen 3.5 Medium Series: Alibaba's Strategic Push for Efficient AI Dominance
Alibaba's Qwen team releases the Qwen 3.5 Medium model series, featuring four specialized variants optimized for different performance profiles. The models demonstrate remarkable efficiency gains through architectural improvements and better training methodologies.
AI Forecasters Revise AGI Timeline: Key Milestones Pulled Forward to 2029-2030 After Recent Model Progress
A significant update from AI forecasters indicates key AGI milestones have been pulled forward, with the median prediction for AGI arrival shifting from 2032 to 2029-2030. This revision follows rapid progress in recent model capabilities, particularly in reasoning and tool use.
Alibaba's Qwen3.5-Omni Launches with Script-Level Captioning, Audio-Visual Vibe Coding, and Real-Time Web Search
Alibaba's Qwen team has released Qwen3.5-Omni, a multimodal model focused on interpreting images, audio, and video with new capabilities like script-level captioning and 'vibe coding'. It's open-access on Hugging Face but does not generate media.
Granola Secures $125M Series C at $1.5B Valuation, Pivots from Meeting Notes to Enterprise AI Agent Platform
Granola raised $125M led by Index Ventures, valuing the AI meeting notetaker at $1.5B. The company is expanding into an enterprise AI platform with new APIs and workspaces, responding to user demand for agent integration.
Qualcomm NPU Shows 6-8x OCR Speed-Up Over CPU in Mobile Workload
A benchmark shows Qualcomm's dedicated NPU processing OCR workloads 6-8 times faster than the device's CPU. This highlights the growing efficiency gap for AI tasks on mobile silicon.
OpenAI President Teases 'Spud' Model, Two Years of Research
OpenAI President Greg Brockman briefly mentioned an upcoming model codenamed 'Spud', stating it represents 'two years worth of research that is coming to fruition.' No technical details or release timeline were provided.
Zuckerberg: Big Tech Fails on AI Due to Disbelief, Not Skill
Mark Zuckerberg states that large companies fail to adopt transformative technologies like AI not due to a lack of skill, but from a cycle of disbelief. By the time they accept the new paradigm, their competitive edge is gone.
Google's AI Infrastructure Strategy: What Retail Leaders Should Watch in 2026
Google's evolving AI infrastructure and compute strategy, including data center investments and model compression techniques, will directly impact how retail brands deploy and scale AI applications by 2026. The company's focus on efficiency and real-time capabilities signals a shift toward more accessible, powerful retail AI tools.