D4RL: Datasets for Deep Data-Driven Reinforcement Learning

Source

Core Claim

D4RL packages offline RL trajectories as (state, action, reward, next-state) transition datasets spanning locomotion, maze navigation, dexterous manipulation, and kitchen tasks.
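A minimal sketch of that transition-dataset layout, mirroring the keys D4RL's `qlearning_dataset` helper returns. The arrays here are synthetic numpy stand-ins (shapes chosen to resemble a HalfCheetah-style task) since the real package requires MuJoCo and a gym environment:

```python
import numpy as np

# Synthetic stand-in for a D4RL transition dataset; the key names match
# d4rl.qlearning_dataset(env), but all values here are random placeholders.
N, obs_dim, act_dim = 1000, 17, 6  # illustrative HalfCheetah-like shapes
rng = np.random.default_rng(0)
dataset = {
    "observations":      rng.standard_normal((N, obs_dim)).astype(np.float32),
    "actions":           rng.uniform(-1, 1, (N, act_dim)).astype(np.float32),
    "rewards":           rng.standard_normal(N).astype(np.float32),
    "next_observations": rng.standard_normal((N, obs_dim)).astype(np.float32),
    "terminals":         np.zeros(N, dtype=bool),
}

# Row i of each array is one transition (s_i, a_i, r_i, s'_i, done_i).
assert all(len(v) == N for v in dataset.values())
```

With the real package installed, the same dict would come from `d4rl.qlearning_dataset(gym.make("halfcheetah-medium-v2"))` rather than random draws.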

Action-Time-Series Notes

  • Treats time as episodic transition sequences rather than regularly sampled calendar time.
  • The action channel is explicit and is typically the environment's continuous control vector.
  • Useful as a clean low-dimensional starting point for action-conditioned dynamics and model-based offline RL.
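To make the last bullet concrete, here is a hedged sketch of the simplest action-conditioned dynamics model one might fit on such transitions: a linear one-step predictor s' ≈ [s, a] W, trained by least squares on synthetic data (the dimensions and noise level are assumptions for illustration, not drawn from a real D4RL dataset):

```python
import numpy as np

# Fit a linear one-step dynamics model s' ~= [s, a] @ W on synthetic
# (state, action, next-state) transitions; a toy stand-in for the
# action-conditioned dynamics models used in model-based offline RL.
rng = np.random.default_rng(1)
N, obs_dim, act_dim = 2000, 4, 2
W_true = rng.standard_normal((obs_dim + act_dim, obs_dim))

S = rng.standard_normal((N, obs_dim))           # states
A = rng.uniform(-1, 1, (N, act_dim))            # actions
X = np.hstack([S, A])                           # model inputs [s, a]
S_next = X @ W_true + 0.01 * rng.standard_normal((N, obs_dim))

# Least-squares fit of W, then one-step prediction error.
W_hat, *_ = np.linalg.lstsq(X, S_next, rcond=None)
mse = float(np.mean((X @ W_hat - S_next) ** 2))
```

In practice the linear map would be replaced by an ensemble of neural networks, but the data interface — batches of (s, a) inputs and s' targets — is exactly what the D4RL transition format provides.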