Deep Learning for Camera Autofocus

Chengyu Wang, Qian Huang, Ming Cheng, Zhan Ma, David J. Brady

Research output: Contribution to journal › Article › peer-review

32 Scopus citations

Abstract

Most digital cameras use specialized autofocus sensors, such as phase detection, lidar or ultrasound, to directly measure focus state. However, such sensors increase cost and complexity without directly optimizing final image quality. This paper proposes a new pipeline for image-based autofocus and shows that neural image analysis finds focus 5-10x faster than traditional contrast maximization. We achieve this by learning the direct mapping between an image and its focus position. In further contrast with conventional methods, AI methods can generate scene-based focus trajectories that optimize synthesized image quality for dynamic and three-dimensional scenes. We propose a focus control strategy that varies focal position dynamically to maximize image quality as estimated from the focal stack. We propose a rule-based agent and a learned agent for different scenarios and show their advantages over other focus stacking methods.
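The abstract's central technical idea, learning a direct mapping from a single defocused image to the in-focus lens position, can be illustrated with a minimal sketch. The network below is a hypothetical PyTorch model, not the authors' published architecture: the layer sizes, patch size, and scalar-regression head are assumptions chosen only to show why a single forward pass can replace the many exposures required by an iterative contrast-maximization search.

```python
# Hedged sketch (not the authors' code): a small CNN that regresses a
# normalized focus position directly from one defocused RGB patch, in the
# spirit of the image-based autofocus pipeline described in the abstract.
import torch
import torch.nn as nn

class FocusStepNet(nn.Module):
    """Predict a scalar focus-position estimate from a defocused image patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global pooling to a 128-dim feature
        )
        self.head = nn.Linear(128, 1)             # scalar focus-position output

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Single-shot autofocus: one inference yields a focus estimate, whereas a
# contrast-based search must capture and score many frames while sweeping
# the lens, which is the source of the claimed 5-10x speedup.
model = FocusStepNet()
patch = torch.rand(1, 3, 128, 128)               # captured defocused patch (illustrative size)
predicted_focus = model(patch).item()            # drive the lens toward this position
```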

Original language: English (US)
Article number: 9354951
Pages (from-to): 258-271
Number of pages: 14
Journal: IEEE Transactions on Computational Imaging
Volume: 7
DOIs
State: Published - 2021

Keywords

  • All-in-focus imaging
  • autofocus
  • computational photography
  • deep learning

ASJC Scopus subject areas

  • Signal Processing
  • Computer Science Applications
  • Computational Mathematics
