Position: What Can Large Language Models Tell Us about Time Series Analysis

Source

Jin et al., "Position: What Can Large Language Models Tell Us about Time Series Analysis," ICML 2024 (arXiv:2402.02713).

Core Claim

The paper argues that LLMs can expand time-series analysis beyond numeric prediction into modality switching, question answering, reasoning, and natural-language interaction. For this wiki, it serves as the position reference for why time-series models may need language interfaces, not only better forecast heads.

Key Contributions

  • Maps time-series analysis tasks onto LLM-era capabilities such as prompting, agents, tool use, and multimodal alignment.
  • Reviews early approaches that adapt LLMs to time series through prompting, reprogramming, soft prompts, or lightweight adapters (a minimal prompting sketch follows this list).
  • Frames time-series question answering and modality switching as important future interfaces.
  • Emphasizes trust, interpretability, and the practical integration of LLM technologies with time-series analysis.
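
As a concrete reference point for the prompting approaches above, the following minimal sketch serializes a numeric series into a text prompt and parses a textual reply back into floats. The separator, decimal precision, and prompt wording are illustrative assumptions, not a recipe from the paper.

```python
# Sketch of "direct prompting over serialized values". The formatting
# choices below (commas, fixed precision, instruction wording) are
# assumptions for illustration only.

def serialize_series(values, precision=2):
    """Render a numeric series as a comma-separated string for an LLM prompt."""
    return ", ".join(f"{v:.{precision}f}" for v in values)

def build_forecast_prompt(values, horizon):
    """Wrap the serialized history in a plain-language forecasting instruction."""
    history = serialize_series(values)
    return (
        f"The following is a time series of {len(values)} observations:\n"
        f"{history}\n"
        f"Continue the series with the next {horizon} values, "
        f"comma-separated, numbers only."
    )

def parse_forecast(reply, horizon):
    """Parse the model's text reply back into floats; stop at trailing junk."""
    out = []
    for tok in reply.replace("\n", " ").split(","):
        try:
            out.append(float(tok.strip()))
        except ValueError:
            break
    return out[:horizon]

# Example round trip (no model call; the reply string stands in for an LLM).
prompt = build_forecast_prompt([1.0, 1.2, 1.4, 1.6], horizon=2)
print(parse_forecast("1.80, 2.00, and so on", horizon=2))  # [1.8, 2.0]
```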

Method Notes

This is not a new forecasting architecture. It is a position paper with a survey component, useful for organizing the space of LLM-assisted time-series analysis.

The paper should be read alongside context-aided forecasting work. A general LLM interface is broader than forecasting with essential textual context, but it risks over-claiming if numeric calibration is not preserved; the sketch below shows one simple way to test that.
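
One way to make the calibration concern testable: score any LLM-produced quantile forecasts with pinball loss and empirical interval coverage. The sketch below uses synthetic Gaussian data and assumed quantile levels; it is a diagnostic illustration, not an evaluation from the paper.

```python
# Minimal calibration diagnostics for quantile forecasts, assuming the
# LLM interface can be asked for specific quantiles. Toy data only.
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Average pinball (quantile) loss at level q."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

def empirical_coverage(y_true, lower, upper):
    """Fraction of observations falling inside the [lower, upper] band."""
    return np.mean((y_true >= lower) & (y_true <= upper))

rng = np.random.default_rng(0)
y = rng.normal(size=500)
# A well-calibrated 80% band for N(0, 1) uses the 0.1 / 0.9 quantiles.
lo, hi = np.full(500, -1.2816), np.full(500, 1.2816)
print(pinball_loss(y, np.zeros(500), q=0.5))  # ~0.40 (half of E|y|)
print(empirical_coverage(y, lo, hi))          # ~0.80 if calibrated
```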

Evidence and Results

The source is mostly conceptual. Its value lies in synthesis and roadmap framing rather than in benchmark results. It names the main interface patterns: direct prompting over serialized values, LLM backbone adaptation, time-series encoders aligned to language space, and agent/tool pipelines; the sketch below illustrates the backbone-adaptation pattern.
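
To make backbone adaptation concrete, here is a minimal PyTorch sketch in which a small trainable projection maps time-series patches into a frozen LLM's embedding space, so only the adapter is trained. The patch length, hidden size, and module names are assumptions for illustration; published reprogramming and adapter methods differ in detail.

```python
# Sketch of the "LLM backbone adaptation" pattern: a lightweight encoder
# turns raw patches into token-like embeddings for a frozen language model.
import torch
import torch.nn as nn

class PatchAdapter(nn.Module):
    def __init__(self, patch_len=16, d_llm=768):
        super().__init__()
        # Trainable projection into the (assumed) LLM embedding width.
        self.proj = nn.Sequential(
            nn.Linear(patch_len, d_llm),
            nn.GELU(),
            nn.Linear(d_llm, d_llm),
        )

    def forward(self, series):
        # series: (batch, length) -> non-overlapping patches -> embeddings
        b, n = series.shape
        patch_len = self.proj[0].in_features
        patches = series[:, : n - n % patch_len].reshape(b, -1, patch_len)
        return self.proj(patches)  # (batch, num_patches, d_llm)

adapter = PatchAdapter()
tokens = adapter(torch.randn(4, 96))
print(tokens.shape)  # torch.Size([4, 6, 768]) -> input to a frozen LLM
```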

Limitations

  • Position paper rather than a controlled model comparison.
  • Many LLM-for-time-series systems remain brittle, expensive, or weakly calibrated.
  • Natural-language interaction does not automatically make a model action-conditioned or causal.

Open Questions

  • Which time-series tasks need an LLM, and which only need better numeric encoders?
  • How can a time-series LLM preserve probabilistic calibration while following instructions?
  • Should language be used as a domain/context interface, a reasoning interface, or the primary modeling substrate?