IMAGES formed by ground-based telescopes are marred by atmospheric 'seeing'. The plane wavefront from an unresolved star is distorted by continually changing turbulent fluctuations in the air's refractive index. Diffraction-limited performance can in principle be recovered through the methods of adaptive optics, in which the instantaneous wavefront shape is sensed and corrected in real time by deformable optics that cancel the distortion(1,2). The highest resolution will be achieved when this technique is applied to multiple-telescope arrays. For such arrays, the biggest errors caused by seeing at infrared wavelengths are the variations in pathlength and wavefront tilt between array elements. We show here that these errors can be derived by an artificial neural network, given only a pair of simultaneous in-focus and out-of-focus images of a reference star formed at the combined focus of all the array elements. We have optimized a neural network appropriate for 2.2-μm wavelength imaging at the Multiple Mirror Telescope in Arizona. Corrections made by moving the beam-combining mirrors will largely recover the diffraction-limited profile, with a resolution of 0.06 arcsec.
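The sensing scheme described above — a neural network mapping a simultaneous in-focus/out-of-focus image pair to pathlength and tilt corrections — can be sketched as a small feedforward network. This is a minimal illustrative sketch only: the image dimensions, layer sizes, and output count are assumptions for demonstration, not the network actually optimized for the Multiple Mirror Telescope, and the weights here are random stand-ins for values that would be learned from simulated seeing-distorted star images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (assumptions, not the paper's dimensions): two
# 13x13-pixel images of the reference star, concatenated into one input.
N_PIX = 13 * 13
N_IN = 2 * N_PIX      # in-focus image + out-of-focus image
N_HIDDEN = 64         # single hidden layer, size chosen for illustration
N_OUT = 3             # e.g. one pathlength (piston) + two tilt terms

# Random weights stand in for weights trained on simulated distortions.
W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_IN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_OUT, N_HIDDEN))
b2 = np.zeros(N_OUT)

def sense_errors(in_focus, out_focus):
    """Map a simultaneous image pair to pathlength/tilt estimates."""
    x = np.concatenate([in_focus.ravel(), out_focus.ravel()])
    h = np.tanh(W1 @ x + b1)   # hidden-layer activations
    return W2 @ h + b2         # linear outputs: correction signals

# Feed one synthetic image pair through the (untrained) network.
img_in = rng.random((13, 13))
img_out = rng.random((13, 13))
corrections = sense_errors(img_in, img_out)
print(corrections.shape)  # -> (3,)
```

In operation, the three outputs would drive the beam-combining mirrors: the piston estimate sets a pathlength offset and the two tilt estimates set mirror angles, closing the adaptive-optics loop each time a new image pair is read out.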