Abstract
We introduce the use of images for word sense disambiguation, either alone or in conjunction with traditional text-based methods. The approach is based on a recently developed method for automatically annotating images using a statistical model of the joint probability of image regions and words. The model itself is learned from a database of images with associated text. To use the model for word sense disambiguation, we constrain the predicted words to be possible senses of the word under consideration. When word prediction is constrained to a narrow set of choices (such as possible senses), it can be quite reliable. We report on experiments using the resulting sense probabilities on their own, as well as to augment a state-of-the-art text-based word sense disambiguation algorithm. To evaluate our approach, we developed a new corpus, ImCor, which pairs a substantive portion of the Corel image data set with disambiguated text drawn from the SemCor corpus. Our experiments on this corpus suggest that visual information can be very useful for disambiguating word senses. They also illustrate that associated non-textual information, such as image data, can help ground language meaning.
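The core step described above, restricting a word-prediction distribution to the candidate senses of an ambiguous word and renormalizing, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function name, the toy probabilities, and the sense labels are all assumptions made for this example.

```python
def constrain_to_senses(word_probs, candidate_senses):
    """Keep only the probability mass assigned to candidate senses,
    then renormalize so the kept values sum to 1.
    (Illustrative sketch, not the paper's actual code.)"""
    kept = {w: p for w, p in word_probs.items() if w in candidate_senses}
    total = sum(kept.values())
    if total == 0:
        # No candidate sense was predicted: fall back to uniform.
        return {s: 1.0 / len(candidate_senses) for s in candidate_senses}
    return {w: p / total for w, p in kept.items()}

# Toy example: an image-based model predicts words for a riverside scene,
# and we disambiguate "bank" between two hypothetical sense labels.
probs = {"river": 0.30, "water": 0.25, "bank_riverside": 0.20,
         "bank_institution": 0.05, "sky": 0.20}
senses = {"bank_riverside", "bank_institution"}
print(constrain_to_senses(probs, senses))
```

Because the competing vocabulary is narrowed from all annotation words to just the two sense labels, even a weak preference in the image model (0.20 vs. 0.05 here) becomes a confident posterior over senses after renormalization.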
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 13-30 |
| Number of pages | 18 |
| Journal | Artificial Intelligence |
| Volume | 167 |
| Issue number | 1-2 |
| DOIs | |
| State | Published - Sep 2005 |
Keywords
- Image auto-annotation
- Region labeling
- Statistical models
- Word sense disambiguation
ASJC Scopus subject areas
- Language and Linguistics
- Linguistics and Language
- Artificial Intelligence