Automated Video Segmentation for Lecture Videos: A Linguistics-Based Approach

Ming Lin, Michael Chau, Jinwei Cao, Jay F. Nunamaker

Research output: Contribution to journal › Article › peer-review

25 Scopus citations


Video, a rich information source, is commonly used for capturing and sharing knowledge in learning systems. However, the unstructured and linear nature of video makes it difficult for end users to access the knowledge it contains. To extract the knowledge structures hidden in a lengthy, multi-topic lecture video and thus make them easily accessible, we first need to segment the video into shorter clips by topic. Because manual segmentation is costly, automated segmentation is highly desirable. However, current automated video segmentation methods rely mainly on scene and shot change detection, which is not suitable for lecture videos, where scene/shot changes are few and topic boundaries are unclear. In this article we investigate a new segmentation approach that performs well on this special type of video: lecture videos. The approach uses natural language processing techniques such as noun phrase extraction and draws on lexical knowledge sources such as WordNet. Multiple linguistics-based segmentation features are used, including content-based features such as noun phrases and discourse-based features such as cue phrases. Our evaluation results indicate that the noun phrase feature is salient.
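The paper's own algorithm is not reproduced here, but the core idea of content-based segmentation — placing a topic boundary where lexical cohesion between adjacent transcript windows drops — can be sketched as follows. This is a simplified, TextTiling-style illustration, not the authors' method: the regex tokenizer is a crude stand-in for real noun phrase extraction, and the function names and the `threshold` value are illustrative assumptions.

```python
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens; a crude stand-in for noun-phrase extraction."""
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = (sum(v * v for v in a.values()) ** 0.5) * \
          (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

def boundaries(sentences, threshold=0.1):
    """Propose a topic boundary between adjacent transcript sentences
    whose lexical cohesion falls below the threshold."""
    cuts = []
    for i in range(len(sentences) - 1):
        sim = cosine(Counter(tokens(sentences[i])),
                     Counter(tokens(sentences[i + 1])))
        if sim < threshold:
            cuts.append(i + 1)  # boundary before sentence i+1
    return cuts

transcript = [
    "the neural network learns weights",
    "weights are updated by gradient descent",
    "now let us discuss database indexing",
    "an index speeds up database queries",
]
print(boundaries(transcript))  # → [2]: the topic shifts before sentence 2
```

A real system would segment on larger windows of the lecture transcript, weight noun phrases rather than raw tokens, and combine this content-based signal with discourse cues such as "now let us discuss".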

Original language: English (US)
Pages (from-to): 27-45
Number of pages: 19
Journal: International Journal of Technology and Human Interaction (IJTHI)
Issue number: 2
State: Published - Apr 2005


Keywords

  • computational linguistics
  • lecture video
  • multimedia application
  • video segmentation
  • virtual learning

ASJC Scopus subject areas

  • Information Systems
  • Human-Computer Interaction


