### Abstract

We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov-chain Monte Carlo (MCMC) algorithms as a function of the target density and the proposal density. The bound relies on approximating the second-largest eigenvalue in the spectrum of the MCMC operator using a variational principle, and the approach is applicable to problems with continuous state spaces. We apply the method to one-dimensional examples with Gaussian and quartic target densities, and we contrast the performance of the random-walk Metropolis-Hastings algorithm with a "smart" variant that incorporates gradient information into the trial moves, a generalization of the Metropolis-adjusted Langevin algorithm. We find that the variational method agrees quite closely with numerical simulations. We also see that the smart MCMC algorithm often fails to converge geometrically in the tails of the target density except in the simplest case we examine, and even then care must be taken to choose the appropriate scaling of the deterministic and random parts of the proposed moves. This calls into question the utility of smart MCMC in more complex problems. Finally, we apply the same method to approximate the rate of convergence in multidimensional Gaussian problems with and without importance sampling. There we demonstrate the necessity of importance sampling for target densities that depend on variables with a wide range of scales.
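The abstract contrasts random-walk Metropolis-Hastings with a gradient-informed ("smart") proposal akin to the Metropolis-adjusted Langevin algorithm (MALA). As an illustration only — this is not the paper's variational bound — the sketch below implements both samplers for a standard 1D Gaussian target and uses the chain's lag-1 autocorrelation as a crude empirical proxy for the convergence rate governed by the second-largest eigenvalue. All function names and step-size choices here are our own assumptions.

```python
import numpy as np

def log_target(x):
    # Standard 1D Gaussian target: log pi(x) = -x^2/2, up to a constant.
    return -0.5 * x * x

def grad_log_target(x):
    return -x

def rw_metropolis_step(x, step, rng):
    # Random-walk Metropolis-Hastings: symmetric Gaussian proposal,
    # so the acceptance ratio reduces to pi(y)/pi(x).
    y = x + step * rng.standard_normal()
    if np.log(rng.random()) < log_target(y) - log_target(x):
        return y, True
    return x, False

def mala_step(x, step, rng):
    # Metropolis-adjusted Langevin: drift along the gradient plus noise.
    def log_q(a, b):
        # log density of proposing b from a under the Langevin proposal.
        mu = a + 0.5 * step**2 * grad_log_target(a)
        return -((b - mu) ** 2) / (2 * step**2)
    y = x + 0.5 * step**2 * grad_log_target(x) + step * rng.standard_normal()
    log_alpha = (log_target(y) - log_target(x)
                 + log_q(y, x) - log_q(x, y))  # asymmetric proposal correction
    if np.log(rng.random()) < log_alpha:
        return y, True
    return x, False

def run(stepper, n=20000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, accepted = 0.0, 0
    xs = np.empty(n)
    for i in range(n):
        x, ok = stepper(x, step, rng)
        accepted += ok
        xs[i] = x
    # Lag-1 autocorrelation: a chain that converges faster decorrelates
    # faster, so smaller rho loosely mirrors a larger spectral gap.
    rho = np.corrcoef(xs[:-1], xs[1:])[0, 1]
    return accepted / n, rho
```

Comparing `run(rw_metropolis_step)` against `run(mala_step)` at various step sizes lets one see how the acceptance rate and decorrelation trade off for each proposal; the paper's variational bound addresses the same spectral gap analytically rather than by simulation.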

| Original language | English (US) |
| --- | --- |
| Article number | 046704 |
| Journal | Physical Review E - Statistical, Nonlinear, and Soft Matter Physics |
| Volume | 78 |
| Issue number | 4 |
| DOIs | https://doi.org/10.1103/PhysRevE.78.046704 |
| State | Published - Oct 20 2008 |
| Externally published | Yes |

### ASJC Scopus subject areas

- Condensed Matter Physics
- Statistical and Nonlinear Physics
- Statistics and Probability

### Cite this

Research output: Contribution to journal › Article

Casey, F. P., Waterfall, J. J., Gutenkunst, R. N., Myers, C. R., & Sethna, J. P. (2008). Variational method for estimating the rate of convergence of Markov-chain Monte Carlo algorithms. *Physical Review E - Statistical, Nonlinear, and Soft Matter Physics*, *78*(4), 046704. https://doi.org/10.1103/PhysRevE.78.046704

```
TY - JOUR
T1 - Variational method for estimating the rate of convergence of Markov-chain Monte Carlo algorithms
AU - Casey, Fergal P.
AU - Waterfall, Joshua J.
AU - Gutenkunst, Ryan N
AU - Myers, Christopher R.
AU - Sethna, James P.
PY - 2008/10/20
Y1 - 2008/10/20
N2 - We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov chain Monte Carlo (MCMC) algorithms as a function of the target density and proposal density. The bound relies on approximating the second largest eigenvalue in the spectrum of the MCMC operator using a variational principle and the approach is applicable to problems with continuous state spaces. We apply the method to one dimensional examples with Gaussian and quartic target densities, and we contrast the performance of the random walk Metropolis-Hastings algorithm with a "smart" variant that incorporates gradient information into the trial moves, a generalization of the Metropolis adjusted Langevin algorithm. We find that the variational method agrees quite closely with numerical simulations. We also see that the smart MCMC algorithm often fails to converge geometrically in the tails of the target density except in the simplest case we examine, and even then care must be taken to choose the appropriate scaling of the deterministic and random parts of the proposed moves. Again, this calls into question the utility of smart MCMC in more complex problems. Finally, we apply the same method to approximate the rate of convergence in multidimensional Gaussian problems with and without importance sampling. There we demonstrate the necessity of importance sampling for target densities which depend on variables with a wide range of scales.
AB - We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov chain Monte Carlo (MCMC) algorithms as a function of the target density and proposal density. The bound relies on approximating the second largest eigenvalue in the spectrum of the MCMC operator using a variational principle and the approach is applicable to problems with continuous state spaces. We apply the method to one dimensional examples with Gaussian and quartic target densities, and we contrast the performance of the random walk Metropolis-Hastings algorithm with a "smart" variant that incorporates gradient information into the trial moves, a generalization of the Metropolis adjusted Langevin algorithm. We find that the variational method agrees quite closely with numerical simulations. We also see that the smart MCMC algorithm often fails to converge geometrically in the tails of the target density except in the simplest case we examine, and even then care must be taken to choose the appropriate scaling of the deterministic and random parts of the proposed moves. Again, this calls into question the utility of smart MCMC in more complex problems. Finally, we apply the same method to approximate the rate of convergence in multidimensional Gaussian problems with and without importance sampling. There we demonstrate the necessity of importance sampling for target densities which depend on variables with a wide range of scales.
UR - http://www.scopus.com/inward/record.url?scp=55149125844&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=55149125844&partnerID=8YFLogxK
U2 - 10.1103/PhysRevE.78.046704
DO - 10.1103/PhysRevE.78.046704
M3 - Article
AN - SCOPUS:55149125844
VL - 78
JO - Physical review. E
JF - Physical review. E
SN - 2470-0045
IS - 4
M1 - 046704
ER -
```