# ICLR 2026 Time-Series Classification Meta-Analysis Snapshot

## Provenance

- Local workspace: `/home/ipse/work/iclr2026/`
- Primary aggregate file: `/home/ipse/work/iclr2026/iclr_item_classification_strict_ts_stats.md`
- Primary row file: `/home/ipse/work/iclr2026/iclr_item_classification_strict_ts.csv`
- Pivot file: `/home/ipse/work/iclr2026/iclr_item_classification_time_series_by_topic_pivot.csv`
- Kind breakdown file: `/home/ipse/work/iclr2026/iclr_item_classification_time_series_by_topic_kind_breakdown.csv`
- Personal route synthesis: `/home/ipse/work/iclr2026/iclr2026_personal_route.md`
- Snapshot date: 2026-05-16

## Strict Refresh Summary

The strict refresh source subset is recorded as `classification_runs/time_series_refresh_20260501`.

- Reclassified items: 509
- Changed rows after merge: 396
- Remaining time-series rows after strict refresh: 169

### Time-Series Category: Before vs. After

| Category | Before | After | Delta |
| --- | ---: | ---: | ---: |
| time series is the main focus | 194 | 159 | -35 |
| time series are one of the modalities | 315 | 10 | -305 |
| no time series | 5142 | 5482 | +340 |
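The deltas in the table above should net to zero, since the strict refresh only moves items between categories without adding or dropping rows. A minimal sketch of that consistency check, using the counts exactly as recorded in the table (the short category keys are abbreviations, not the CSV's actual labels):

```python
# Before/after counts copied from the category table; keys are shorthand.
before = {
    "main focus": 194,
    "one of the modalities": 315,
    "no time series": 5142,
}
after = {
    "main focus": 159,
    "one of the modalities": 10,
    "no time series": 5482,
}

# Per-category deltas; their sum must be zero if no rows were added or lost.
delta = {k: after[k] - before[k] for k in before}
assert sum(delta.values()) == 0
print(delta)
```

The same check applied to the topic table below yields a net delta of -340, matching the 340 rows that moved out of the time-series categories entirely.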

### Topic Distribution Within Remaining Time-Series Rows

| Topic | Before | After | Delta |
| --- | ---: | ---: | ---: |
| forecasting | 109 | 63 | -46 |
| representation learning | 224 | 59 | -165 |
| reasoning | 55 | 5 | -50 |
| none | 121 | 42 | -79 |

### Main Pivot

| Time-series category | Forecasting | Representation learning | Reasoning | None | Total |
| --- | ---: | ---: | ---: | ---: | ---: |
| time series is the main focus | 62 | 56 | 5 | 36 | 159 |
| time series are one of the modalities | 1 | 3 | 0 | 6 | 10 |
| TOTAL | 63 | 59 | 5 | 42 | 169 |
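The pivot can be regenerated from the row file by counting (category, topic) pairs. A minimal sketch, assuming columns named `time_series` and `topic` in `iclr_item_classification_strict_ts.csv` (the real headers may differ); inline sample rows stand in for the actual file:

```python
import csv
import io
from collections import Counter

# Hypothetical sample standing in for iclr_item_classification_strict_ts.csv;
# the column names `time_series` and `topic` are assumptions about its schema.
sample = """time_series,topic
time series is the main focus,forecasting
time series is the main focus,representation learning
time series are one of the modalities,none
"""

# Count each (category, topic) pair, mirroring one cell of the pivot table.
pivot = Counter()
for row in csv.DictReader(io.StringIO(sample)):
    pivot[(row["time_series"], row["topic"])] += 1

# Column totals, mirroring the TOTAL row of the pivot.
totals = Counter()
for (_category, topic), n in pivot.items():
    totals[topic] += n

print(dict(pivot))
print(dict(totals))
```

Against the real row file, the `totals` line should reproduce the TOTAL row above (63, 59, 5, 42) once the column names are corrected to match the CSV.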

## Alex Follow-Up Interpretation

The public follow-up post on 2026-05-01 states that a Codex-assisted, non-exhaustively hand-verified meta-analysis found that 32 of 57 time-series representation-learning papers came from the EEG/ECG/neuro/physiology space, while the rest of the time-series work leaned more toward forecasting.

The local aggregate files preserve the strict-refresh counts as 59 representation-learning rows overall, with 56 rows where time series is the main focus. They do not preserve the hand-verification field that produced the public `32 out of 57` denominator. For wiki use, treat `32/57` as Alex's hand-verified interpretation of the local analysis, and treat the aggregate tables above as the machine-readable audit trail.

## Personal Route Synthesis

The local `iclr2026_personal_route.md` frames the conference route around:

- latent-space predictive learning, JEPA, next-embedding, and world models;
- multimodal time-series reasoning;
- synthetic pretraining, controllable evaluation, and scaling laws;
- learned chunking, tokenization, patching, and SSMs;
- unified multimodal pretraining beyond language.

It explicitly recommends framing the narrative as `from forecasting to temporal world models`, and filtering out forecasting papers that only improve metrics without answering representation-quality questions.

