Extracting latent attributes from video scenes using text as background knowledge

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We explore the novel task of identifying latent attributes in video scenes, such as the mental states of actors, using only large text collections as background knowledge and minimal information about the videos, such as activity and actor types. We formalize the task and a measure of merit that accounts for the semantic relatedness of mental state terms. We develop and test several largely unsupervised information extraction models that identify the mental states of human participants in video scenes. We show that these models produce complementary information and their combination significantly outperforms the individual models as well as other baseline methods.

Original language: English (US)
Title of host publication: Proceedings of the 3rd Joint Conference on Lexical and Computational Semantics, *SEM 2014
Publisher: Association for Computational Linguistics (ACL)
Pages: 121-131
Number of pages: 11
ISBN (Electronic): 9781941643259
State: Published - Jan 1 2014
Event: 3rd Joint Conference on Lexical and Computational Semantics, *SEM 2014 - Dublin, Ireland
Duration: Aug 23 2014 - Aug 24 2014

Other

Other: 3rd Joint Conference on Lexical and Computational Semantics, *SEM 2014
Country: Ireland
City: Dublin
Period: 8/23/14 - 8/24/14

ASJC Scopus subject areas

  • Computer Science Applications
  • Information Systems
  • Computer Networks and Communications


Cite this

Tran, A., Surdeanu, M., & Cohen, P. R. (2014). Extracting latent attributes from video scenes using text as background knowledge. In Proceedings of the 3rd Joint Conference on Lexical and Computational Semantics, *SEM 2014 (pp. 121-131). Association for Computational Linguistics (ACL).