Object Representations for Learning and Reasoning
Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS)
December 11, 2020, Virtual Workshop
Semantic State Representation for Reinforcement Learning
- Erez Schwartz, Guy Tennenholtz, Chen Tessler, and Shie Mannor
Abstract
Recent advances in reinforcement learning have shown its potential to tackle complex real-life tasks. However, as the task's dimensionality increases, reinforcement learning methods tend to struggle. To overcome this, we explore methods for representing the semantic information embedded in the state. While previous methods focused on information in its raw form (e.g., raw visual input), we propose representing the state as natural language. Language can represent complex scenarios and concepts, making it a favorable candidate for representation. Empirical evidence in the ViZDoom domain suggests that natural-language-based agents are more robust, converge faster, and perform better than vision-based agents, demonstrating the benefit of natural language representations for reinforcement learning.
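To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of a natural language state representation for a policy: a hypothetical `describe_state` function turns a structured game state into a sentence, and a simple bag-of-words encoder feeds a small policy head. The state fields, vocabulary, and network sizes are illustrative assumptions only.

```python
# Sketch only: the state schema, describe_state, and network sizes are
# hypothetical placeholders, not the method described in the paper.
import torch
import torch.nn as nn


def describe_state(state: dict) -> str:
    """Turn a structured game state into a short natural-language description."""
    parts = [f"you have {state['health']} health"]
    for enemy in state.get("enemies", []):
        parts.append(f"an enemy is {enemy['distance']} meters to the {enemy['side']}")
    return " , ".join(parts)


class LanguagePolicy(nn.Module):
    """Bag-of-words sentence encoder followed by a small policy head."""

    def __init__(self, vocab, num_actions, embed_dim=32):
        super().__init__()
        self.word_to_idx = {w: i for i, w in enumerate(vocab)}
        self.embedding = nn.EmbeddingBag(len(vocab), embed_dim)  # mean-pooled word embeddings
        self.policy_head = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, num_actions)
        )

    def forward(self, sentence: str) -> torch.Tensor:
        # Map known words to indices; out-of-vocabulary tokens are skipped.
        ids = [self.word_to_idx[w] for w in sentence.split() if w in self.word_to_idx]
        tokens = torch.tensor(ids, dtype=torch.long).unsqueeze(0)  # shape (1, seq_len)
        return self.policy_head(self.embedding(tokens))  # action logits


# Toy usage: describe a state, then score actions from the sentence.
state = {"health": 40, "enemies": [{"distance": 5, "side": "left"}]}
sentence = describe_state(state)
vocab = sorted(set("you have health an enemy is meters to the left right 40 5".split()))
policy = LanguagePolicy(vocab, num_actions=4)
logits = policy(sentence)  # a behavior policy would sample an action from these logits
```

In this sketch the sentence encoder is a deliberately simple bag-of-words embedding; any text encoder (e.g., a recurrent or transformer model) could be substituted, and the policy head would be trained with a standard reinforcement learning algorithm.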