At Steward Observatory we are developing an adaptive optics program for the Multiple Mirror Telescope (MMT), initially operating in the near infrared. Using a neural network to recognize wavefront aberrations in real time from a pair of in-focus and out-of-focus images has proven to be a promising new method of wavefront sensing, especially with the advent of low-noise, fast-readout IR detectors. A neural net requires on the order of 10,000 training image pairs to learn to recognize the wavefront aberrations in a new, previously unseen pair. Training begins with aberrated images created by the adaptive instrument itself, but since the instrument corrects over a region of approximately 2r0 (where r0 is Fried's parameter), the high-spatial-frequency components of real atmospheric turbulence are absent from these training images. We therefore use computer-simulated image pairs, generated by atmospheric models based on Kolmogorov turbulence theory, to further train the neural nets for the real conditions encountered when observing. Recently we have expanded our atmospheric modeling to include the stratification of turbulence into layers. Using knife-edge and phase structure function measurements, we have begun to model the temporal characteristics caused by atmospheric winds. The motivation for this modeling is to eventually train nets to separate the various turbulent layers, allowing multi-conjugate wavefront correction, a method which greatly extends the isoplanatic patch. Presented here are descriptions of our modeling techniques and results, including comparisons between stratified and single-layer models.
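As an illustrative sketch only (not the authors' code), a Kolmogorov phase screen of the kind underlying such atmospheric models can be generated with the standard FFT filtering method: white Gaussian noise is shaped by the square root of the Kolmogorov phase power spectrum, 0.023 r0^(-5/3) f^(-11/3). The function name, grid parameters, and the overall normalization convention below are assumptions, and the piston (DC) term is simply zeroed out.

```python
import numpy as np

def kolmogorov_phase_screen(n, pixel_scale, r0, seed=None):
    """Sketch of an FFT-based Kolmogorov phase screen (in radians).

    n           -- grid size in pixels
    pixel_scale -- metres per pixel in the pupil plane
    r0          -- Fried's parameter in metres
    """
    rng = np.random.default_rng(seed)

    # Spatial frequency grid (cycles per metre).
    fx = np.fft.fftfreq(n, d=pixel_scale)
    fx, fy = np.meshgrid(fx, fx)
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0  # placeholder to avoid dividing by zero at DC

    # Kolmogorov phase power spectral density, 0.023 r0^(-5/3) f^(-11/3).
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    psd[0, 0] = 0.0  # remove the (undefined) piston term

    # Filter complex white Gaussian noise by sqrt(PSD) and transform.
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    df = 1.0 / (n * pixel_scale)  # frequency-grid spacing
    screen = np.fft.ifft2(noise * np.sqrt(psd)) * n**2 * df
    return screen.real

# Example: 256x256 screen, 2 cm pixels, r0 = 15 cm.
screen = kolmogorov_phase_screen(256, 0.02, r0=0.15, seed=1)
```

Because the piston term is zeroed, the resulting screen has exactly zero mean; one such screen (or a sum of several, propagated at different altitudes and wind speeds) can then blur a point source to produce simulated in-focus and out-of-focus training pairs.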