Researchers Advance Time Series Foundation Models with New Rating Framework and Billion-Scale Architecture

New papers introduce TSRating for data quality assessment and Timer-S1, an 8.3B-parameter time series foundation model achieving state-of-the-art results.

Researchers have published three significant advances in time series foundation models, addressing data quality assessment, model scaling, and in-context learning.

According to a paper accepted at ICLR 2026 and posted on arxiv.org, TSRating introduces a novel framework for rating time series data quality across diverse domains. The system leverages large language models' knowledge to judge quality differences in time series data through prompted pairwise comparisons. TSRating trains a dedicated model called TSRater using meta-learning across nine distinct domains, employing signSGD for efficient inner-loop updates. Testing on eleven benchmark datasets across three time series tasks demonstrated that TSRating "outperforms baselines in terms of estimation accuracy, efficiency, and domain adaptability."
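The signSGD optimizer mentioned above updates each parameter using only the sign of its gradient, which makes inner-loop adaptation in meta-learning cheap to compute and communicate. A minimal sketch of one such update step (illustrative only; the function and values below are not from the paper):

```python
import numpy as np

def signsgd_step(params, grads, lr=0.1):
    """One signSGD update: move each parameter by a fixed step size
    in the direction opposite the gradient's sign, ignoring its magnitude."""
    return {name: p - lr * np.sign(grads[name]) for name, p in params.items()}

# Toy parameters and gradients for illustration.
params = {"w": np.array([0.5, -0.3]), "b": np.array([0.1])}
grads = {"w": np.array([2.0, -0.01]), "b": np.array([-5.0])}

updated = signsgd_step(params, grads, lr=0.1)
# Note: both w-gradients move the weights by exactly 0.1,
# despite differing in magnitude by a factor of 200.
```

Because every coordinate moves by the same step size regardless of gradient scale, signSGD is robust to the widely varying magnitudes seen across heterogeneous time series domains.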

Separately, a paper posted on arxiv.org describes Timer-S1, a Mixture-of-Experts time series foundation model with 8.3 billion total parameters, of which 0.75 billion are activated per token. According to the paper, Timer-S1 introduces "Serial Scaling" across model architecture, dataset, and training pipeline, utilizing a Serial-Token Prediction objective. The model was trained on TimeBench, a curated corpus containing one trillion time points. When evaluated on the GIFT-Eval leaderboard, Timer-S1 "achieves state-of-the-art forecasting performance, attaining the best MASE and CRPS scores as a pre-trained model."
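The gap between 8.3B total and 0.75B activated parameters comes from the Mixture-of-Experts design: a gating network routes each token to only a few experts, so most parameters sit idle on any given token. A generic sketch of top-k expert routing (an illustration of the MoE principle, not Timer-S1's actual architecture or routing scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_forward(x, gate_w, experts, top_k=2):
    """Route one token through only the top_k highest-scoring experts,
    so activated parameters per token are a small fraction of the total."""
    scores = x @ gate_w                     # one gating logit per expert
    chosen = np.argsort(scores)[-top_k:]    # indices of the top_k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                # softmax over chosen experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

d, n_experts = 8, 16
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

x = rng.normal(size=d)                      # a single token embedding
y = moe_forward(x, gate_w, experts, top_k=2)
```

With 16 experts and top-2 routing, only 2/16 of the expert weights are touched per token, mirroring how Timer-S1 activates roughly 0.75B of its 8.3B parameters.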

A third paper, to appear at ESANN 2026, demonstrated in-context learning for time series classification using foundation models, applying the method to vibration data for bearing health assessment without fine-tuning.
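In-context learning of this kind conditions a frozen model on a handful of labeled examples at inference time rather than updating its weights. One common realization, sketched here as an illustration (this is a generic nearest-neighbor scheme under a toy embedding, not the ESANN paper's specific method), embeds the labeled context series and the query, then assigns the nearest context label:

```python
import numpy as np

def icl_classify(context_series, context_labels, query, embed):
    """In-context classification sketch: embed labeled context examples
    and a query series, then return the label of the nearest embedding.
    No weights are updated, i.e. no fine-tuning."""
    ctx = np.stack([embed(s) for s in context_series])
    dists = np.linalg.norm(ctx - embed(query), axis=1)
    return context_labels[int(np.argmin(dists))]

# Toy "embedding" (mean and std of the series); a real system would use
# a pre-trained time series foundation model encoder here instead.
embed = lambda s: np.array([s.mean(), s.std()])

healthy = np.sin(np.linspace(0, 10, 200))                       # smooth vibration
faulty = healthy + np.random.default_rng(1).normal(0, 0.5, 200)  # noisy vibration

label = icl_classify([healthy, faulty], ["healthy", "faulty"],
                     healthy + 0.01, embed)
```

The appeal for bearing health assessment is operational: new fault classes can be handled by swapping the context examples, with no retraining.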