sequence modeling

30 articles about sequence modeling in AI news

IAT: Instance-As-Token Compression for Historical User Sequence Modeling

Researchers propose Instance-As-Token (IAT), which compresses all features of each historical interaction into a unified embedding token, then applies standard sequence modeling. This approach outperforms state-of-the-art methods and has been deployed in e-commerce advertising, shopping mall marketing, and live-streaming e-commerce with substantial business metric improvements.
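The core idea can be sketched in a few lines. This is a minimal illustration of "one interaction becomes one token", not the paper's actual compressor; all sizes and the linear projection are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all names/sizes hypothetical): each historical interaction
# carries several side features (item id, category, price bucket, ...),
# each already mapped to an embedding vector.
n_interactions, n_features, feat_dim, token_dim = 5, 3, 8, 16

# Per-interaction feature embeddings: (seq_len, n_features, feat_dim)
feature_embs = rng.normal(size=(n_interactions, n_features, feat_dim))

# Instance-As-Token idea: compress all features of one interaction into
# a single unified token. A linear projection of the concatenated
# features stands in for whatever compressor IAT actually uses.
W = rng.normal(size=(n_features * feat_dim, token_dim))
tokens = feature_embs.reshape(n_interactions, -1) @ W  # (seq_len, token_dim)

# The compressed tokens can now be fed to any standard sequence model.
print(tokens.shape)
```

The payoff is that downstream sequence length equals the number of interactions, not interactions times features.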

91% relevant

WeightCaster: How Sequence Modeling in Weight Space Could Solve AI's Extrapolation Problem

Researchers propose WeightCaster, a novel framework that treats out-of-support generalization as a sequence modeling problem in neural network weight space. This approach enables AI models to make plausible, interpretable predictions beyond their training distribution without catastrophic failure.

75% relevant

PRAGMA: Revolut's Foundation Model for Banking Event Sequences

A new research paper introduces PRAGMA, a family of foundation models designed specifically for multi-source banking event sequences. The model uses masked modeling on a large corpus of financial records to create general-purpose embeddings that achieve strong performance on downstream tasks like fraud detection with minimal fine-tuning.
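The masked-modeling objective described above can be sketched as follows; the token ids, mask ratio, and ignore-index convention are illustrative assumptions, not PRAGMA's actual preprocessing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical token ids for one user's banking event sequence.
MASK_ID = 0
events = np.array([12, 7, 33, 51, 7, 19, 42, 5])

# Masked modeling: hide a random ~25% of positions; the model is then
# trained to reconstruct them from the surrounding events.
mask = rng.random(events.shape) < 0.25
inputs = np.where(mask, MASK_ID, events)
targets = np.where(mask, events, -100)  # -100 = position ignored by the loss

print(inputs, targets)
```

Because the objective needs no labels, the same corpus of raw financial records yields general-purpose embeddings reusable across downstream tasks.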

72% relevant

SELLER: A New Sequence-Aware LLM Framework for Explainable Recommendations

Researchers propose SELLER, a framework that uses Large Language Models to generate explanations for recommendations by modeling user behavior sequences. It outperforms prior methods by integrating explanation quality with real-world utility metrics.

92% relevant

Beyond Sequence Generation: The Emergence of Agentic Reinforcement Learning for LLMs

A new survey paper argues that LLM reinforcement learning must evolve beyond narrow sequence generation to embrace true agentic capabilities. The research introduces a comprehensive taxonomy for agentic RL, mapping environments, benchmarks, and frameworks shaping this emerging field.

85% relevant

TME-PSR: A New Sequential Recommendation Model Unifies Time, Multi-Interest, and Explanation Modeling

Researchers propose TME-PSR, a model integrating personalized time patterns, multi-interest modeling, and explanation alignment for sequential recommendations. It shows improved accuracy and explanation quality with lower computational cost in experiments.

80% relevant

Tencent Launches 2025 Ad Algorithm Challenge with Massive All-Modality Recommendation Datasets

Tencent has launched an open competition and released two industrial-scale datasets (TencentGR-1M and TencentGR-10M) to advance generative recommender systems. This has spurred related research into debiasing techniques and novel reranking frameworks, moving the field toward more holistic, multi-modal user modeling.

87% relevant

Research Exposes Hidden Data Splitting in Sequential Recommendation Models, Questioning SOTA Claims

Researchers found that sub-sequence splitting (SSS), a data augmentation technique, is widely but covertly used in recent sequential recommendation models. When removed, model performance often plummets, suggesting many published SOTA results are misleading. The study calls for more rigorous and transparent evaluation standards.
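The augmentation at issue is simple to state: one interaction sequence is expanded into every prefix/next-item training pair. A minimal sketch (function name and signature are ours, not the study's):

```python
# Sub-sequence splitting (SSS): expand one user sequence into all
# prefix -> next-item training pairs, multiplying the training data.
def sub_sequence_split(seq, min_len=1):
    """Yield (history, next_item) pairs from one interaction sequence."""
    return [(seq[:i], seq[i]) for i in range(min_len, len(seq))]

pairs = sub_sequence_split([10, 20, 30, 40])
print(pairs)
# [([10], 20), ([10, 20], 30), ([10, 20, 30], 40)]
```

The study's point is that whether a model was trained with or without this expansion is rarely reported, making cross-paper comparisons unreliable.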

82% relevant

SLSREC: A New Self-Supervised Model for Disentangling Long- and Short-Term User Interests in Recommendations

A new arXiv preprint introduces SLSREC, a self-supervised model that disentangles long-term user preferences from short-term intentions using contrastive learning and adaptive fusion. It outperforms state-of-the-art models on three benchmark datasets, addressing a core challenge in dynamic user modeling.

88% relevant

New Relative Contrastive Learning Framework Boosts Sequential Recommendation Accuracy by 4.88%

A new arXiv paper introduces Relative Contrastive Learning (RCL) for sequential recommendation. It solves a data scarcity problem in prior methods by using similar user interaction sequences as additional training signals, leading to significant accuracy improvements.
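The mechanism can be sketched with a standard InfoNCE loss in which a behaviourally similar user's sequence embedding serves as an extra positive; everything below (the encoder stand-ins, dimensions, temperature) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Standard InfoNCE: pull the positive close, push negatives away."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(2)
anchor = rng.normal(size=8)                        # target user's sequence encoding
similar_user = anchor + 0.1 * rng.normal(size=8)   # a behaviourally similar user
negatives = [rng.normal(size=8) for _ in range(4)]

# RCL's core move (as summarized above): similar users' sequences act as
# additional positives, easing the scarcity of augmented views.
loss = info_nce(anchor, similar_user, negatives)
print(float(loss))
```

The relative weighting between strict positives and these weaker similar-user positives is where the paper's contribution lies.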

88% relevant

MMM4Rec: A New Multi-Modal Mamba Model for Faster, More Transferable Sequential Recommendations

Researchers propose MMM4Rec, a novel sequential recommendation framework using State Space Duality for efficient multi-modal learning. It claims 10x faster fine-tuning convergence and improved accuracy by dynamically prioritizing the visual/textual information most relevant to each user's interaction sequence.

90% relevant

VISTA: A Novel Two-Stage Framework for Scaling Sequential Recommenders to Lifelong User Histories

Researchers propose VISTA, a two-stage modeling framework that decomposes target attention to scale sequential recommendation to a million-item user history while keeping inference costs fixed. It has been deployed on a platform serving billions.

90% relevant

TimeSqueeze: A New Method for Dynamic Patching in Time Series Forecasting

Researchers introduce TimeSqueeze, a dynamic patching mechanism for Transformer-based time series models. It adaptively segments sequences based on signal complexity, achieving up to 20x faster convergence and 8x higher data efficiency. This addresses a core trade-off between accuracy and computational cost in long-horizon forecasting.
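A toy version of complexity-driven patching makes the trade-off concrete; using local variance as the complexity signal and halving the patch size above a threshold are our stand-in assumptions, not TimeSqueeze's actual mechanism.

```python
import numpy as np

def dynamic_patch(series, base=8, var_thresh=1.0):
    """Split a series into patches, using finer patches where the local
    variance (a stand-in complexity signal) is high."""
    patches, i = [], 0
    while i < len(series):
        size = base
        if np.var(series[i:i + base]) > var_thresh:
            size = max(base // 2, 1)  # finer resolution for complex regions
        patches.append(series[i:i + size])
        i += size
    return patches

# Smooth first half, noisy second half -> coarse patches, then fine ones.
rng = np.random.default_rng(3)
series = np.concatenate([np.zeros(16), rng.normal(scale=3.0, size=16)])
patches = dynamic_patch(series)
print([len(p) for p in patches])
```

Smooth spans are covered by few tokens while volatile spans keep fine resolution, which is where the reported speed and data-efficiency gains come from.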

70% relevant

New Research Improves Text-to-3D Motion Retrieval with Interpretable Fine-Grained Alignment

Researchers propose a novel method for retrieving 3D human motion sequences from text descriptions using joint-angle motion images and token-patch interaction. It outperforms state-of-the-art methods on standard benchmarks while offering interpretable correspondences.

75% relevant

Amazon's T-REX: A Transformer Architecture for Next-Basket Grocery Recommendations

Amazon researchers propose T-REX, a transformer-based model for grocery basket recommendations. It addresses unique challenges like repetitive purchases and sparse patterns through category-level modeling and causal masking, showing significant improvements in offline/online tests.
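The causal-masking ingredient is standard and easy to show: position t may only attend to positions up to t, so the model cannot peek at future purchases. The sketch below is generic causal attention, not T-REX's specific architecture.

```python
import numpy as np

seq_len = 4
# Lower-triangular mask: row t allows attention to columns 0..t only.
causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))

# Applied to attention scores: disallowed entries get -inf before softmax.
scores = np.random.default_rng(4).normal(size=(seq_len, seq_len))
masked = np.where(causal_mask, scores, -np.inf)

weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
print(weights.round(2))
```

The first position attends only to itself (weight 1.0), and every row still sums to 1 after masking.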

90% relevant

Survey Paper 'The Latent Space' Maps Evolution from Token Generation to Latent Computation in Language Models

Researchers have published a comprehensive survey charting the evolution of language model architectures from token-level autoregression to methods that perform computation in continuous latent spaces. This work provides a unified framework for understanding recent advances in reasoning, planning, and long-context modeling.

85% relevant

MCLMR: A Model-Agnostic Causal Framework for Multi-Behavior Recommendation

Researchers propose MCLMR, a causal learning framework that addresses confounding effects in multi-behavior recommendation systems. It uses adaptive aggregation and bias-aware contrastive learning to improve preference modeling from diverse user interactions like views, clicks, and purchases.

86% relevant

Exploration Space Theory: A Formal Framework for Prerequisite-Aware Recommendation Systems

Researchers propose Exploration Space Theory (EST), a lattice-theoretic framework for modeling prerequisite dependencies in location-based recommendations. It provides structural guarantees and validity certificates for next-step suggestions, with potential applications beyond tourism.

95% relevant

Vanast Unifies Virtual Try-On & Animation in Single-Step CVPR 2026 Framework

A CVPR 2026 paper introduces Vanast, a unified model for virtual try-on and human image animation in one step. It aims to preserve identity and enable zero-shot interpolation, streamlining a traditionally complex process.

85% relevant

Kronos AI Outperforms Leading Time Series Models by 93% on Candlestick Data

Researchers from Tsinghua University released Kronos, an open-source foundation model trained on 12 billion candlestick records from 45 exchanges. It reportedly achieves 93% higher accuracy than leading time series models for price and volatility forecasting, requiring no fine-tuning.

95% relevant

InCoder-32B-Thinking Hits 81.3% on LiveCodeBench, Trained on Chip & Kernel Traces

InCoder-32B-Thinking, a 32B parameter model trained on execution traces from chip design, GPU kernels, and embedded systems, scores 81.3% on LiveCodeBench V5 and an 84% compile pass rate on CAD-Coder.

92% relevant

CoDiS: A Causal Framework for Cross-Domain Sequential Recommendation

A new arXiv paper introduces CoDiS, a framework for Cross-Domain Sequential Recommendation that uses causal inference to disentangle domain-shared and domain-specific user preferences while addressing context confounding and gradient conflicts. It outperforms state-of-the-art baselines on three real-world datasets.

82% relevant

Kuaishou's Dual-Rerank: A New Industrial Framework for High-Stakes Generative Reranking

Researchers from Kuaishou introduce Dual-Rerank, a framework designed for industrial-scale generative reranking. It addresses the dual dilemma of structural trade-offs (AR vs. NAR models) and optimization gaps (SL vs. RL) through Sequential Knowledge Distillation and List-wise Decoupled Reranking Optimization. A/B tests on production traffic show significant improvements in user satisfaction and watch time with reduced latency.

82% relevant

Privacy-First Personalization: How Synthetic Data Powers Accurate Recommendations Without Risk

A new approach uses GANs or VAEs to generate synthetic customer behavior data for training recommendation engines. This eliminates privacy risks and regulatory burdens while maintaining performance, as demonstrated by a German bank's 73% drop in data exposure incidents.

82% relevant

AI Firms Target Biotech for High-Impact, High-Margin Applications

A trend analysis notes AI companies are shifting focus to biotech, where accurate prediction models can be monetized through drug discovery and synthetic biology, creating a new competitive frontier.

85% relevant

FAVE: A New Flow-Based Method for One-Step Sequential Recommendation

A new arXiv paper introduces FAVE, a framework for sequential recommendation that uses a two-stage training strategy to learn a direct trajectory from a user's history to the next item. It promises high accuracy and dramatically faster inference, making it suitable for real-time applications.

73% relevant

DrugPlayGround Benchmark Tests LLMs on Drug Discovery Tasks

A new framework called DrugPlayGround provides the first standardized benchmark for evaluating large language models on key drug discovery tasks, including predicting drug-protein interactions and chemical properties. This addresses a critical gap in objectively assessing LLMs' potential to accelerate pharmaceutical research.

95% relevant

The RealReal CMO Samantha McCandless on Resale Math, Vintage Bulgari, and Her Go-To Sneakers

In a personal shopping profile, The RealReal's Chief Merchandising Officer, Samantha McCandless, explains her 'resale math'—funding new purchases by consigning items—and her passion for vintage jewelry and beauty staples, offering a firsthand look at the executive mindset fueling the luxury resale market.

76% relevant

Goal-Aligned Recommendation Systems: Lessons from Return-Aligned Decision Transformer

The article discusses Return-Aligned Decision Transformer (RADT), a method that aligns recommender systems with long-term business returns. It addresses the common problem where models ignore target signals, offering a framework for transaction-driven recommendations.

90% relevant

DeepSeek's HISA: Hierarchical Sparse Attention Cuts 64K Context Indexing Cost

DeepSeek researchers introduced HISA, a hierarchical sparse attention method that replaces flat token scanning. It removes a computational bottleneck at 64K context lengths without requiring any model retraining.
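The hierarchical idea can be sketched in two levels: score cheap block summaries first, then compute exact token scores only inside the best blocks. Mean-pooled summaries, block size, and top-k are illustrative assumptions, not HISA's actual design.

```python
import numpy as np

def hierarchical_sparse_scores(q, keys, block_size=4, top_k=2):
    """Two-level attention sketch: coarse block selection, then exact
    token-level scoring restricted to the selected blocks."""
    n_blocks = len(keys) // block_size
    blocks = keys.reshape(n_blocks, block_size, -1)
    # Level 1: one mean-pooled summary vector per block, scored cheaply.
    summaries = blocks.mean(axis=1)
    picked = np.argsort(summaries @ q)[-top_k:]
    # Level 2: exact scores only for tokens inside the picked blocks.
    scores = np.full(len(keys), -np.inf)
    for b in picked:
        sl = slice(b * block_size, (b + 1) * block_size)
        scores[sl] = keys[sl] @ q
    return scores

rng = np.random.default_rng(5)
q, keys = rng.normal(size=16), rng.normal(size=(32, 16))
scores = hierarchical_sparse_scores(q, keys)
print(np.isfinite(scores).sum())  # only top_k * block_size = 8 tokens scored
```

Per query, the token-level work scales with the number of selected blocks rather than the full context length, which is why the flat-scan bottleneck disappears at 64K tokens.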

85% relevant