TY - GEN
T1 - Minimization of continuous Bethe approximations
T2 - 26th Annual Conference on Neural Information Processing Systems 2012, NIPS 2012
AU - Pacheco, Jason L.
AU - Sudderth, Erik B.
PY - 2012
Y1 - 2012
N2 - We develop convergent minimization algorithms for Bethe variational approximations which explicitly constrain marginal estimates to families of valid distributions. While existing message passing algorithms define fixed point iterations corresponding to stationary points of the Bethe free energy, their greedy dynamics do not distinguish between local minima and maxima, and can fail to converge. For continuous estimation problems, this instability is linked to the creation of invalid marginal estimates, such as Gaussians with negative variance. Conversely, our approach leverages multiplier methods with well-understood convergence properties, and uses bound projection methods to ensure that marginal approximations are valid at all iterations. We derive general algorithms for discrete and Gaussian pairwise Markov random fields, showing improvements over standard loopy belief propagation. We also apply our method to a hybrid model with both discrete and continuous variables, showing improvements over expectation propagation.
UR - http://www.scopus.com/inward/record.url?scp=84877785656&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84877785656&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84877785656
SN - 9781627480031
T3 - Advances in Neural Information Processing Systems
SP - 2564
EP - 2572
BT - Advances in Neural Information Processing Systems 25
Y2 - 3 December 2012 through 6 December 2012
ER -