Time-Series Foundation Models
Summary
The time-series cluster covers forecasting, classification, reasoning, generation, and model compression, all under the constraint that temporal structure differs from that of text and vision.
What The Wiki Currently Believes
- CauKer shows that synthetic, causally generated time series can pretrain classification TSFMs (a toy causal generator is sketched after this list).
- ChatTS aligns LLMs with multivariate time series using synthetic paired time-series/text data.
- Eidos moves forecasting pretraining from observation-space values to latent-space predictive dynamics.
- FlowRanks argues that time-series Transformers have low-rank structure that enables compression (see the factorization sketch after this list).
- TimeOmni-1 formalizes time-series reasoning tasks.
- TimeOmni-VL unifies time-series understanding and generation through a vision-centric representation.
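The wiki does not record CauKer's generator in detail; the sketch below is only a toy illustration of the underlying idea of pretraining data drawn from a structural causal model with lagged parent influences between channels. The function name, weight scales, and DAG construction are assumptions, not CauKer's actual method.

```python
import numpy as np

def sample_causal_series(n_channels=4, length=256, max_lag=3, seed=0):
    """Toy structural-causal generator: each channel is driven by lagged
    values of randomly chosen parent channels plus noise. Illustrative only;
    not CauKer's construction."""
    rng = np.random.default_rng(seed)
    # Random DAG over channels: channel i may depend only on channels j < i.
    parents = {0: np.array([], dtype=int)}
    for i in range(1, n_channels):
        k = int(rng.integers(1, i + 1))
        parents[i] = rng.choice(i, size=k, replace=False)
    weights = {i: rng.normal(0.0, 0.5, size=(len(parents[i]), max_lag))
               for i in range(n_channels)}
    x = np.zeros((n_channels, length))
    for t in range(1, length):
        for i in range(n_channels):
            drive = 0.0
            for p_idx, j in enumerate(parents[i]):
                for lag in range(1, min(max_lag, t) + 1):
                    drive += weights[i][p_idx, lag - 1] * x[j, t - lag]
            # Mild autoregression keeps each channel smooth and bounded.
            x[i, t] = 0.6 * x[i, t - 1] + 0.3 * drive + rng.normal(0.0, 0.1)
    return x  # shape: (n_channels, length)

series = sample_causal_series()
```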
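Likewise, FlowRanks' specific rank analysis is not reproduced here; the following is a generic truncated-SVD factorization of a weight matrix, the standard way low-rank structure is turned into parameter savings. The function name and shapes are illustrative.

```python
import numpy as np

def factorize_linear(W, rank):
    """Replace a dense weight matrix W (out_dim x in_dim) with two thin
    factors via truncated SVD -- the generic low-rank compression move."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (out_dim, rank)
    B = Vt[:rank, :]             # (rank, in_dim)
    return A, B

# Toy check: a matrix whose true rank is 8 is recovered almost exactly,
# while the parameter count drops from out*in to rank*(out + in).
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 512))
A, B = factorize_linear(W, rank=8)
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"relative reconstruction error: {rel_err:.1e}")
```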
Evidence
Taken together, these works suggest that time series need representation assumptions of their own: causality, rank structure, numerical fidelity, temporal reasoning, and latent dynamics all matter more explicitly than they do in standard language-model transfer.
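To make the contrast with value-space forecasting concrete, below is a minimal sketch of a latent-dynamics objective in the spirit of the Eidos entry: encode adjacent windows and train a predictor to match the next window's latent rather than its raw values. The module names, dimensions, and loss are assumptions, not Eidos's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentDynamicsSketch(nn.Module):
    """Toy latent-dynamics objective: encode each window into a latent vector
    and train a predictor to match the encoding of the *next* window, rather
    than regressing future observed values directly."""
    def __init__(self, window: int, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 128), nn.GELU(),
                                     nn.Linear(128, latent_dim))
        self.dynamics = nn.Sequential(nn.Linear(latent_dim, 128), nn.GELU(),
                                      nn.Linear(128, latent_dim))

    def forward(self, past_window, next_window):
        z_past = self.encoder(past_window)             # latent for window t
        z_target = self.encoder(next_window).detach()  # target latent, no grad
        z_pred = self.dynamics(z_past)                 # predicted latent t+1
        return F.mse_loss(z_pred, z_target)

# Usage on random stand-in data: batches of adjacent windows from one series.
model = LatentDynamicsSketch(window=96)
loss = model(torch.randn(32, 96), torch.randn(32, 96))
loss.backward()
```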
Open Questions
- Which time-series tasks genuinely require reasoning rather than pattern matching?
- Can one model support forecasting fidelity, causal reasoning, and natural-language interaction?