Abstract
This paper considers the problem of designing time-dependent real-time control policies for controllable nonlinear diffusion processes, with the goal of obtaining maximally informative observations about parameters of interest. More precisely, we maximize the expected Fisher information for the parameter obtained over the duration of the experiment, conditional on observations made up to that time. We propose to accomplish this with a two-step strategy: when the full state vector of the diffusion process is observable continuously, we formulate this as an optimal control problem and apply numerical techniques from stochastic optimal control to solve it. When observations are incomplete, infrequent, or noisy, we propose using standard filtering techniques to first estimate the state of the system and then apply the optimal control policy using the posterior expectation of the state. We assess the effectiveness of these methods in three situations: a paradigmatic bistable model from statistical physics, a model of action potential generation in neurons, and a model of a simple ecological system.
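The two-step strategy described above can be illustrated with a toy version of the first step. The sketch below is an assumption-laden illustration, not the paper's method: it simulates a bistable double-well diffusion dX = (θX − X³ + u) dt + σ dW by Euler–Maruyama and accumulates the continuous-observation Fisher information for θ, which by Girsanov's theorem is the time integral of (∂f/∂θ)²/σ² = X²/σ². The specific drift, the parameter values, and the simple "push away from the origin" policy are all illustrative choices; the paper instead computes an optimal policy numerically via stochastic optimal control.

```python
import numpy as np

def simulate_fisher_info(control, theta=1.0, sigma=0.5, u_max=1.0,
                         T=10.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of dX = (theta*X - X^3 + u) dt + sigma dW.

    Under continuous observation of the path, the Fisher information for
    theta is the integral over [0, T] of (df/dtheta)^2 / sigma^2, which
    here equals X_t^2 / sigma^2 since df/dtheta = X.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x, info = 0.1, 0.0
    for _ in range(n_steps):
        u = control(x, u_max)
        info += (x ** 2 / sigma ** 2) * dt  # accumulate Fisher information
        x += (theta * x - x ** 3 + u) * dt \
             + sigma * np.sqrt(dt) * rng.standard_normal()
    return info

# Compare no control against a naive policy that pushes the state away
# from the origin, where the drift is uninformative about theta.
info_passive = simulate_fisher_info(lambda x, u_max: 0.0)
info_active = simulate_fisher_info(lambda x, u_max: u_max * np.sign(x))
```

With the same noise realization, the active policy parks the state in a deeper, more distant well, so the accumulated information exceeds the uncontrolled run; an optimal policy would do at least as well as this heuristic.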
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 234-264 |
| Number of pages | 31 |
| Journal | SIAM/ASA Journal on Uncertainty Quantification |
| Volume | 3 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2015 |
Keywords
- Controlled diffusions
- Design of experiments
- Neuron dynamics
- Stochastic optimal control
- Stochastic population dynamics
ASJC Scopus subject areas
- Statistics and Probability
- Modeling and Simulation
- Statistics, Probability and Uncertainty
- Discrete Mathematics and Combinatorics
- Applied Mathematics