Object Representations for Learning and Reasoning

Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS)

December 11, 2020, Virtual Workshop


Word(s) and Object(s): Grounded Language Learning In Information Retrieval

  • Federico Bianchi, Jacopo Tagliabue, and Ciro Greco
  • PDF


We present a grounded language model for Information Retrieval that learns lexical and compositional meaning for search queries from dense representations of objects; in our case, the target entities are products, modeled as low-dimensional embeddings trained on behavioural data from an e-commerce website. Crucially, the proposed semantics exhibits compositional virtues yet remains fully learnable without explicit labelling: the domain of reference, denotation, and composition are all learned from user data alone. We benchmark the grounded model against state-of-the-art intra-textual models (such as word2vec and BERT) and show that it achieves higher accuracy and better generalization.
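The abstract does not spell out the model, but the core idea of grounding query words in product embeddings learned from user behaviour can be sketched as follows. This is an illustrative toy, not the paper's implementation: the product vectors, the click log, and the averaging-based composition are all assumptions made for the example (real embeddings would be trained on behavioural data, and the paper's compositional semantics may differ).

```python
import numpy as np

# Toy product vectors standing in for low-dimensional embeddings that
# would, in the real setting, be trained on e-commerce behavioural data.
products = {
    "red_shoes":  np.array([1.0, 0.0, 1.0, 0.0]),
    "blue_shoes": np.array([0.0, 1.0, 1.0, 0.0]),
    "red_bag":    np.array([1.0, 0.0, 0.0, 1.0]),
    "blue_bag":   np.array([0.0, 1.0, 0.0, 1.0]),
}

# Hypothetical interaction log: query word -> products users engaged with.
clicks = {
    "red":   ["red_shoes", "red_bag"],
    "blue":  ["blue_shoes", "blue_bag"],
    "shoes": ["red_shoes", "blue_shoes"],
}

def denotation(word):
    """Ground a word as the centroid of the products it led users to."""
    return np.mean([products[p] for p in clicks[word]], axis=0)

def compose(words):
    """One simple compositional choice: average the word denotations."""
    return np.mean([denotation(w) for w in words], axis=0)

def rank(query_vec):
    """Rank products by cosine similarity to the grounded query vector."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(products, key=lambda p: cos(query_vec, products[p]),
                  reverse=True)

print(rank(compose(["red", "shoes"])))  # "red_shoes" ranks first
```

Note that denotation and composition here are induced entirely from the (toy) interaction data, with no explicit labels, which is the property the abstract emphasizes.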