Object Representations for Learning and Reasoning

Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS)

December 11, 2020, Virtual Workshop


Deep Affordance Foresight: Planning for What Can Be Done Next

  • Danfei Xu, Ajay Mandlekar, Roberto Martín-Martín, Yuke Zhu, and Li Fei-Fei
  • (Oral)
  • PDF


Robotic planning in realistic environments requires searching in large planning spaces. A powerful concept for guiding the search is affordance, which models which actions can succeed in a given situation. However, the classical notion of affordance is unsuitable for planning because it only informs the robot about the immediate outcome of actions, not which actions are best for achieving a long-term goal. In this paper, we introduce a new affordance representation and a learning-to-plan framework that enable the robot to reason about the long-term effects of actions by modeling what actions will be possible in the future. We show that our method, Deep Affordance Foresight, can effectively learn multi-step tool-use tasks and quickly adapt to a new, longer-horizon task.