
A Multimodal LDA Model integrating Textual, Cognitive and Visual Modalities

Stephen Roller, Sabine Schulte im Walde

Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1146–1157, 2013.


Abstract

Recent investigations into grounded models of language have shown that holistic views of language and perception can provide higher performance than independent views. In this work, we improve a two-dimensional multimodal version of Latent Dirichlet Allocation (Andrews et al., 2009) in various ways. (1) We outperform text-only models in two different evaluations, and demonstrate that low-level visual features are directly compatible with the existing model. (2) We present a novel way to integrate visual features into the LDA model using unsupervised clusters of images. The clusters are directly interpretable and improve on our evaluation tasks. (3) We provide two novel ways to extend the bimodal models to support three or more modalities. We find that the three-, four-, and five-dimensional models significantly outperform models using only one or two modalities, and that nontextual modalities each provide separate, disjoint knowledge that cannot be forced into a shared, latent structure.
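Illustration

The abstract's second contribution, integrating visual features via unsupervised clusters of images, can be sketched in a few lines of code. The sketch below is not the paper's actual model (which extends the bimodal LDA of Andrews et al. (2009) with modality-specific emission distributions per topic); it only illustrates the general recipe of quantizing low-level visual features into discrete "visual words" with k-means and combining them with textual tokens in a standard LDA. All data, dimensions, and parameter values are hypothetical.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical toy data: per-document word counts and per-image low-level
# feature vectors (e.g. color or texture descriptors), purely for illustration.
rng = np.random.default_rng(0)
n_docs, vocab_size, n_images, feat_dim = 20, 50, 200, 16
word_counts = rng.integers(0, 5, size=(n_docs, vocab_size))
image_feats = rng.random((n_images, feat_dim))
image_to_doc = rng.integers(0, n_docs, size=n_images)  # document each image belongs to

# Step 1: quantize continuous visual features into discrete "visual words"
# via unsupervised clustering (k-means here; the paper's clustering setup may differ).
n_visual_words = 10
visual_word_ids = KMeans(n_clusters=n_visual_words, n_init=10, random_state=0).fit_predict(image_feats)

# Step 2: build per-document counts over the visual-word vocabulary.
visual_counts = np.zeros((n_docs, n_visual_words), dtype=int)
for doc, vw in zip(image_to_doc, visual_word_ids):
    visual_counts[doc, vw] += 1

# Step 3: concatenate textual and visual vocabularies and fit a single LDA.
# (The paper instead couples modalities through shared topics with separate
# emission distributions; this concatenation is only a rough stand-in.)
combined = np.hstack([word_counts, visual_counts])
lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(combined)
print(doc_topics.shape)  # (n_docs, n_topics)

One appeal of the clustering step, as noted in the abstract, is that the resulting visual clusters are directly interpretable: each cluster can be inspected as a group of images that the model treats as a single discrete symbol.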

BibTeX

@inproceedings{roller13_emnlp,
  title     = {A Multimodal LDA Model integrating Textual, Cognitive and Visual Modalities},
  author    = {Roller, Stephen and {Schulte im Walde}, Sabine},
  year      = {2013},
  booktitle = {Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  pages     = {1146--1157}
}