Extension of the infrastructure supporting network-centric operations to the tactical edge relies primarily on wireless communications. The nature of these communications ensures that the channel will vary dramatically due to environmental and propagation effects, as well as potential interferers. Tactical operations introduce additional impairment due to mobility. With convergence to Internet Protocol (IP) centric networking, voice communications at the tactical edge will be increasingly dependent on Voice over IP (VoIP). In a packet-based protocol such as IP, two critical factors affecting latency-sensitive applications such as VoIP are end-to-end delay and packet loss. The tradeoff between these two factors is a prime consideration in designing a jitter-buffer playout scheme. Because voice packets experience random network delay, a jitter buffer is required to maintain consistently spaced playout of voice samples. A deep buffer protects against packet loss due to late arrival of packets; however, it introduces mouth-to-ear delay that ultimately degrades the perceived voice quality. We describe an algorithm for dynamically estimating network delay using time series models. This enables the VoIP application to manage the jitter buffer so as to maintain a minimum playout buffer while keeping the packet loss rate below an acceptable threshold, thereby preserving consistent voice quality. Our proposed algorithm limits sensitivity to short-term delay jitter while remaining responsive to bursty network traffic. Simulation results show an improvement of 11% to 15% using metrics based on the ITU-T E-model (R-factor) when compared against currently used playout methods. The gain from the proposed method may be particularly significant for the challenge of supporting bursty, dynamically varying wireless communication channels.
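To make the delay/loss tradeoff concrete, the sketch below shows a classic adaptive playout-delay estimator of the kind the proposed time-series algorithm is compared against. This is not the paper's algorithm: it is the well-known exponentially weighted mean/variation scheme, included only to illustrate how a playout deadline is set from estimated delay plus a jitter-proportional safety margin. The class name and the `alpha`/`beta` parameter values are illustrative assumptions, not values from the paper.

```python
class AdaptivePlayoutEstimator:
    """Illustrative adaptive jitter-buffer playout estimator (EWMA style).

    Tracks a running estimate of network delay and its variation; the
    playout deadline is the delay estimate plus beta times the variation.
    A larger beta deepens the buffer: fewer late-packet losses, but more
    mouth-to-ear delay.
    """

    def __init__(self, alpha=0.998, beta=4.0):
        self.alpha = alpha        # smoothing factor (close to 1 = slow adaptation)
        self.beta = beta          # safety-margin multiplier on jitter
        self.mean_delay = None    # running delay estimate (ms)
        self.var_delay = 0.0      # running mean absolute deviation (ms)

    def update(self, network_delay):
        """Fold in one measured packet delay (ms); return playout deadline (ms)."""
        if self.mean_delay is None:
            self.mean_delay = network_delay
        else:
            self.mean_delay = (self.alpha * self.mean_delay
                               + (1.0 - self.alpha) * network_delay)
            self.var_delay = (self.alpha * self.var_delay
                              + (1.0 - self.alpha)
                              * abs(self.mean_delay - network_delay))
        return self.mean_delay + self.beta * self.var_delay
```

In use, the estimator is updated with each packet's measured one-way delay, and a packet is dropped as late if it arrives after its playout deadline. The paper's contribution replaces the simple exponential smoothing above with time-series delay models that react faster to bursty traffic.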