T-Rep: Representation Learning for Time Series using Time-Embeddings
Source
- Raw Markdown: paper_t-rep-2023.md
- PDF: paper_t-rep-2023.pdf
- Preprint: arXiv 2310.04486
- Official code: Let-it-Care/T-Rep
Core Claim
Time-series SSL should learn timestep-granularity representations and explicitly learn embeddings of time, because trend, periodicity, distribution shifts, and missingness patterns are part of the temporal signal.
Key Contributions
- Introduces T-Rep, a self-supervised time-series representation method with learned time-embeddings.
- Concatenates learned time-embeddings with projected signal values before a dilated convolutional encoder.
- Uses pretext tasks that exploit time-embeddings to capture fine-grained temporal dependencies.
- Evaluates on classification, forecasting, anomaly detection, and missing-data robustness.
- Provides latent-space visualizations to inspect interpretability of learned representations.
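The concatenation step from the contributions above can be sketched minimally. This is a hypothetical numpy illustration, not the paper's implementation: the Time2Vec-style embedding, the fixed frequencies, and the projection matrix `W` are all stand-ins for learned components, and the result would feed a dilated convolutional encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_embedding(t, dim):
    """Hypothetical Time2Vec-style embedding: one linear (trend-capable)
    term plus sinusoidal terms. Frequencies are fixed here but would be
    learned in the actual model."""
    freqs = np.arange(1, dim)                # stand-in for learned frequencies
    linear = t[:, None]                      # (T, 1) linear trend term
    periodic = np.sin(np.outer(t, freqs))    # (T, dim-1) periodic terms
    return np.concatenate([linear, periodic], axis=1)  # (T, dim)

T, d_in, d_time, d_proj = 100, 3, 8, 16
x = rng.standard_normal((T, d_in))           # observed multivariate series
W = rng.standard_normal((d_in, d_proj))      # stand-in for a learned projection

t = np.arange(T) / T                         # normalised timestamps
h = np.concatenate([x @ W, time_embedding(t, d_time)], axis=1)
# h is (T, d_proj + d_time): per-timestep input to the dilated-conv encoder
```

The point of the sketch is the data layout: time features are concatenated per timestep with projected values, so the encoder sees time as part of its input channels rather than as an additive positional code.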
Method Notes
T-Rep is a passive representation model. It encodes observed time series for downstream tasks and does not expose actions, control inputs, interventions, or counterfactual rollout channels.
The important mechanism is that time is not only a positional code. The learned time-embedding is trained as a signal-bearing representation that can carry trend, periodicity, and shift information even when observations are missing.
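One way to see how a time-embedding becomes signal-bearing rather than merely positional is through a pretext loss that conditions prediction on it. The sketch below is an illustrative toy in the spirit of the paper's pretext tasks, not a reproduction of them: `z`, `tau`, the linear predictor `W`, and the gap `delta` are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_z, d_tau = 64, 16, 8

# Stand-ins: z would be per-timestep encoder representations,
# tau the learned time-embeddings (random placeholders here).
z = rng.standard_normal((T, d_z))
tau = rng.standard_normal((T, d_tau))

# Toy pretext task: predict the representation at t + delta from the
# representation at t together with the time-embedding at t + delta.
# Minimising this loss forces tau to carry usable temporal information.
delta = 5
inp = np.concatenate([z[:-delta], tau[delta:]], axis=1)  # (T-delta, d_z+d_tau)
W = rng.standard_normal((d_z + d_tau, d_z)) * 0.01       # stand-in predictor
pred = inp @ W
loss = np.mean((pred - z[delta:]) ** 2)                  # regression pretext loss
```

Because the target depends on the time-embedding of the *future* step, gradients push trend and periodicity information into `tau`, which is exactly what lets it stand in for the signal at missing observations.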
Evidence And Results
The paper reports gains over existing SSL time-series baselines across classification, forecasting, and anomaly detection, plus strong resilience under missing-data regimes. It is especially relevant alongside TS2Vec: both care about timestep-level representations, but T-Rep makes the time-embedding itself part of the learned representation contract.
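The classification evaluations above follow the classical frozen-encoder recipe: pool timestep-level representations into one vector per series, then fit a lightweight probe. A minimal sketch, assuming hypothetical pre-computed representations `z` (the pooling choice and least-squares probe are illustrative, not the paper's exact protocol):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, d, n_cls = 40, 30, 8, 2

# Hypothetical: z holds frozen per-timestep representations for N series.
z = rng.standard_normal((N, T, d))
y = rng.integers(0, n_cls, N)

# Pool over time, then fit a least-squares linear probe on one-hot
# targets -- a classical downstream head over a frozen encoder.
feats = z.max(axis=1)                        # (N, d) max-pool over timesteps
onehot = np.eye(n_cls)[y]                    # (N, n_cls) targets
W, *_ = np.linalg.lstsq(feats, onehot, rcond=None)
pred = feats @ W
acc = np.mean(pred.argmax(axis=1) == y)      # probe accuracy on this data
```

The representation quality, not the probe, carries the evaluation: if timestep-level features encode trend and periodicity well, even this simple head separates classes.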
Alex Notes
- Alex flagged this as normal / none and described it as a self-supervised method for timestep-granularity representations.
- Key thing to read later: whether learned time-embeddings are just positional features or whether they provide a reusable temporal state for downstream tasks.
Limitations
- Not a broad pretrained TSFM with released general-purpose zero-shot weights.
- Uses classical downstream heads and benchmark suites; foundation-model-style transfer remains a separate question.
- Learned time-embeddings may be dataset-specific unless tested across domains and temporal granularities.
Links Into The Wiki
- Self-Supervised Representation Learning
- Time-Series Classification Foundation Models
- Time-Series Foundation Models
- TS2Vec
Open Questions
- Can T-Rep-style learned time-embeddings scale to large heterogeneous TSFM pretraining?
- How do time-embeddings interact with patching, irregular sampling, and missingness?
- Could time-embeddings help latent predictive models avoid slow-feature shortcuts?