ICLR 2026 Time-Series Classification Meta-Analysis

Source

Core Claim

The local ICLR 2026 classification pass suggests a split field. After a strict time-series refresh, forecasting remains the largest explicit time-series topic in the aggregate counts, while representation-learning work is substantial but, per Alex's hand-verified follow-up interpretation, heavily concentrated in clinical, biosignal, EEG/ECG, neuro, and physiology-adjacent papers.

Key Facts

  • The strict refresh reclassified 509 items and left 169 remaining time-series rows.
  • Of those remaining rows, the aggregate topic counts were 63 forecasting, 59 representation learning, 5 reasoning, and 42 none.
  • For rows where time series was the main focus, the aggregate pivot recorded 62 forecasting, 56 representation learning, 5 reasoning, and 36 none.
  • Alex’s follow-up post reports a hand-verified interpretation that 32 out of 57 time-series representation-learning papers came from the EEG/ECG/neuro/physiology space.
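The counts above can be sanity-checked with a short script. The dict structure and labels below are illustrative, not the actual schema of the aggregate files; only the numbers are taken from this note.

```python
# Sanity-check the reported aggregate counts from the strict refresh.
# Topic labels and numbers are copied from the Key Facts above; the
# dict layout is illustrative, not the actual file schema.
aggregate = {"forecasting": 63, "representation": 59, "reasoning": 5, "none": 42}
main_focus = {"forecasting": 62, "representation": 56, "reasoning": 5, "none": 36}

# The aggregate topic counts should sum to the 169 remaining rows.
assert sum(aggregate.values()) == 169

# Main-focus rows are a subset, so each count is bounded by its aggregate.
assert all(main_focus[k] <= aggregate[k] for k in aggregate)

# Alex's hand-verified share of biosignal representation-learning papers.
share = 32 / 57
print(f"{share:.0%}")  # roughly 56%
```

Note that the main-focus counts sum to 159, not 169, which is consistent with some remaining rows treating time series as a secondary topic.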

Wiki Use

Treat this source as a field map, not a formal survey. It supports the claim that the broader time-series foundation-model conversation is still forecasting-heavy, and that a large visible part of representation-learning work is concentrated in biomedical and physiological signals.

This matters for Latent-State Time-Series Modeling: if latent-state representation learning is mostly appearing in physiology and neuro signals, the wiki should actively look for analogous representation objectives in observability, telecom, industrial control, robotics, energy, and other enterprise-scale systems rather than letting forecasting leaderboards define the whole agenda.

Limitations

  • The row-level aggregate files preserve 59 representation-learning rows overall and 56 main-focus representation-learning rows, not the final hand-verified 57-row denominator, so Alex's 32/57 figure cannot be reconstructed exactly from the aggregate files alone.
  • The public follow-up explicitly says the Codex-assisted batch analysis used non-exhaustive hand verification and may contain errors.
  • This is a local conference-program analysis, not a peer-reviewed bibliometric study.

Open Questions

  • Which non-biomedical domains have enough data and tasks to support time-series representation learning beyond forecasting?
  • Should future discovery passes tag representation-learning sources by domain so this concentration can be tracked explicitly?
  • How should conference-program field maps be validated before they become durable wiki claims?
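If future discovery passes do tag representation-learning sources by domain, a minimal record shape might look like the following sketch. All field names, the domain vocabulary, and the example rows are hypothetical; nothing here comes from the actual aggregate files.

```python
from dataclasses import dataclass, field

# Hypothetical record for a domain-tagged discovery-pass row; the
# field names and domain vocabulary are assumptions, not the real schema.
@dataclass
class TaggedRow:
    title: str
    topic: str                                   # e.g. "representation"
    domains: list = field(default_factory=list)  # e.g. ["eeg", "physiology"]

rows = [
    TaggedRow("example paper A", "representation", ["eeg"]),
    TaggedRow("example paper B", "representation", ["observability"]),
    TaggedRow("example paper C", "forecasting", ["energy"]),
]

# With domain tags in place, the biomedical concentration becomes a
# one-line query instead of a hand-verification pass.
biomed = {"eeg", "ecg", "neuro", "physiology"}
rep = [r for r in rows if r.topic == "representation"]
n_biomed = sum(1 for r in rep if biomed & set(r.domains))
print(f"{n_biomed}/{len(rep)} representation rows tagged biomedical")
```

The point of the sketch is only that a `domains` list per row would let the concentration claimed above be tracked explicitly across future passes.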