Time Series Forecasting Using Manifold Learning

Source

Core Claim

High-dimensional time-series forecasting can be decomposed into embed, predict, and lift: embed the observed series onto a lower-dimensional manifold, forecast dynamics in that latent space, then lift predictions back to the original observation space.
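
A minimal end-to-end sketch of that pipeline (illustrative only; the toy data, the hyperparameters, and the ridge-regression stand-in for the latent predictor are our choices, not the paper's):

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.linear_model import Ridge
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Toy data: a 2-D limit cycle observed through a random 20-dimensional map.
t = np.linspace(0, 40 * np.pi, 4000)
latent_true = np.stack([np.sin(t), np.cos(t)], axis=1)
X = latent_true @ rng.normal(size=(2, 20)) + 0.01 * rng.normal(size=(4000, 20))

# 1) Embed: map observations onto 2 latent coordinates.
Z = LocallyLinearEmbedding(n_components=2, n_neighbors=20).fit_transform(X)

# 2) Predict: one-step latent dynamics z_{t+1} ~ W z_t, an MVAR(1) stand-in.
latent_model = Ridge(alpha=1e-3).fit(Z[:-1], Z[1:])

# 3) Lift: learn the latent -> observation map by RBF interpolation.
lift = RBFInterpolator(Z, X, neighbors=50, smoothing=1e-8)

# Roll the latent model forward, lifting each step back to observation space.
z, forecast = Z[-1], []
for _ in range(100):
    z = latent_model.predict(z[None, :])[0]
    forecast.append(lift(z[None, :])[0])
forecast = np.asarray(forecast)        # (100, 20) forecast in original space
```

Swapping LLE for Diffusion Maps, ridge for Gaussian process regression, and RBF interpolation for Geometric Harmonics yields the other combinations the paper considers.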

Key Contributions

  • Uses manifold learning methods such as Locally Linear Embedding and Diffusion Maps to create low-dimensional embeddings (a diffusion-map and geometric-harmonics sketch follows this list).
  • Forecasts the embedded trajectory with models such as multivariate autoregression and Gaussian process regression.
  • Lifts forecasts back to the original space with radial basis function interpolation or Geometric Harmonics.
  • Tests the framework on synthetic stochastic time series and a real FOREX forecasting setup.
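
Neither Diffusion Maps nor Geometric Harmonics ships with scikit-learn, so here is a self-contained sketch of both in their standard formulations; the median-distance bandwidth heuristic, the truncation level, and all function names are our choices, not the paper's:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def diffusion_maps(X, n_components=2, eps=None):
    """Diffusion-map embedding: Gaussian kernel, Markov normalization,
    coordinates = eigenvalue-scaled nontrivial eigenvectors."""
    D2 = cdist(X, X, "sqeuclidean")
    eps = np.median(D2) if eps is None else eps   # common bandwidth heuristic
    K = np.exp(-D2 / eps)
    d = K.sum(axis=1)
    Ms = K / np.sqrt(np.outer(d, d))              # symmetric conjugate of D^-1 K
    vals, vecs = eigh(Ms)
    order = np.argsort(vals)[::-1]                # descending eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    psi = vecs / np.sqrt(d)[:, None]              # right eigenvectors of the Markov matrix
    return psi[:, 1:n_components + 1] * vals[1:n_components + 1]  # drop trivial psi_0

def geometric_harmonics_lift(Z_train, X_train, Z_new, eps=None, n_harmonics=50):
    """Nystrom-style geometric harmonics: expand the latent -> observation map
    in kernel eigenvectors on Z_train, then extend to new latent points."""
    D2 = cdist(Z_train, Z_train, "sqeuclidean")
    eps = np.median(D2) if eps is None else eps
    vals, vecs = eigh(np.exp(-D2 / eps))
    keep = np.argsort(vals)[::-1][:n_harmonics]   # largest, best-conditioned modes
    vals, vecs = vals[keep], vecs[:, keep]
    coeffs = vecs.T @ X_train                     # project observations onto harmonics
    K_new = np.exp(-cdist(Z_new, Z_train, "sqeuclidean") / eps)
    return (K_new @ vecs / vals) @ coeffs         # Nystrom extension, then synthesis
```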

Method Notes

This is a classical latent-space forecasting method rather than a neural time-series foundation model (TSFM). Its connection to current work is conceptual: it cleanly separates representation learning, latent transition modeling, and observation reconstruction into three swappable stages.

The method is passive: the latent dynamics are not conditioned on actions, control inputs, or interventions (the sketch below illustrates the distinction).
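
To make the contrast concrete, a toy sketch on random stand-in data (neither model is from the paper):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
Z = rng.normal(size=(500, 2))   # stand-in latent trajectory (T, d)
A = rng.normal(size=(500, 1))   # hypothetical action/control sequence (T, k)

# Passive transition, as in the paper's setting: z_{t+1} = f(z_t).
passive = Ridge().fit(Z[:-1], Z[1:])

# Action-conditioned transition (not in the paper): z_{t+1} = f(z_t, a_t).
conditioned = Ridge().fit(np.hstack([Z[:-1], A[:-1]]), Z[1:])
```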

Evidence And Results

The paper reports that the embed-predict-lift pipeline produces strong forecasts on synthetic data and outperforms conventional schemes based on direct full-space forecasting or PCA in the FOREX rolling-window experiments.
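
The rolling-window protocol itself is generic; a sketch of its shape (the window lengths, step, and `fit_forecast` callable are placeholders, not the paper's settings):

```python
import numpy as np

def rolling_window_eval(X, fit_forecast, train_len=500, horizon=20, step=50):
    """Rolling-origin evaluation: refit on each window, forecast the next
    `horizon` steps, and score against the held-out continuation."""
    errors = []
    for start in range(0, len(X) - train_len - horizon, step):
        train = X[start:start + train_len]
        truth = X[start + train_len:start + train_len + horizon]
        pred = fit_forecast(train, horizon)   # e.g., an embed-predict-lift pipeline
        errors.append(np.mean((pred - truth) ** 2))
    return np.asarray(errors)
```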

Alex Notes

  • Alex flagged this as “normal / none” and worth reading more carefully.
  • Possible research link: the embed-predict-lift decomposition may be close in spirit to NEPA or latent-space predictive learning, but this should remain a hypothesis until the paper is read in detail.
  • User note: the FOREX setup uses manifold learning to generate embeddings, forecasts in the embedding space, and lifts results back to the original space.

Limitations

  • Manifold smoothness, stationarity, and sampling-density assumptions can fail in real financial or operational systems.
  • No large-scale pretraining, no zero-shot transfer claim, and no language/context interface.
  • The method does not by itself solve how to learn stable latent spaces under interventions or distribution shift.

Open Questions

  • Can embed-predict-lift be revived with modern learned encoders and predictive latent objectives?
  • What tests would distinguish useful low-dimensional dynamics from spurious manifold structure? (One candidate diagnostic is sketched after this list.)
  • Could this become a non-neural baseline for action-conditioned world-model latent spaces?
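
For the second question, one candidate diagnostic (our suggestion, not from the paper): compare latent one-step forecast skill on the real series against phase-randomized surrogates, which preserve each channel's power spectrum but destroy nonlinear geometry. If the pipeline does no better on real data than on surrogates, the recovered manifold structure is suspect.

```python
import numpy as np

def phase_surrogate(x, rng):
    """Phase-randomized surrogate: keeps each channel's power spectrum but
    scrambles phases, destroying nonlinear geometric structure."""
    F = np.fft.rfft(x, axis=0)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=F.shape)
    phases[0] = 0.0                       # keep the mean (DC bin) intact
    if len(x) % 2 == 0:
        phases[-1] = 0.0                  # keep the Nyquist bin real as well
    return np.fft.irfft(np.abs(F) * np.exp(1j * phases), n=len(x), axis=0)

def one_step_error(X, embed_predict):
    """Variance-normalized one-step latent forecast error for a supplied
    embed_predict(X) -> (Z, fitted one-step model) routine."""
    Z, model = embed_predict(X)
    return np.mean((model.predict(Z[:-1]) - Z[1:]) ** 2) / np.var(Z)

# Idea: if one_step_error(X, pipeline) is not clearly below the distribution
# of one_step_error(phase_surrogate(X, rng), pipeline) across many surrogate
# draws, the recovered low-dimensional dynamics may be spurious.
```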