MM Algorithms for Variance Components Models

Hua Zhou, Liuyi Hu, Jin Zhou, Kenneth Lange

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Variance components estimation and mixed model analysis are central themes in statistics with applications in numerous scientific disciplines. Despite the best efforts of generations of statisticians and numerical analysts, maximum likelihood estimation (MLE) and restricted MLE of variance component models remain numerically challenging. Building on the minorization–maximization (MM) principle, this article presents a novel iterative algorithm for variance components estimation. Our MM algorithm is trivial to implement and competitive on large data problems. The algorithm readily extends to more complicated problems such as linear mixed models, multivariate response models possibly with missing data, maximum a posteriori estimation, and penalized estimation. We establish the global convergence of the MM algorithm to a Karush–Kuhn–Tucker point and demonstrate, both numerically and theoretically, that it converges faster than the classical EM algorithm when the number of variance components is greater than two and all covariance matrices are positive definite. Supplementary materials for this article are available online.

Original language: English (US)
Journal: Journal of Computational and Graphical Statistics
DOI: 10.1080/10618600.2018.1529601
State: Published - Jan 1 2019


Keywords

  • Global convergence
  • Linear mixed model (LMM)
  • Matrix convexity
  • Maximum a posteriori (MAP) estimation
  • Minorization–maximization (MM)
  • Multivariate response
  • Penalized estimation
  • Variance components model

ASJC Scopus subject areas

  • Statistics and Probability
  • Discrete Mathematics and Combinatorics
  • Statistics, Probability and Uncertainty

Cite this

MM Algorithms for Variance Components Models. / Zhou, Hua; Hu, Liuyi; Zhou, Jin; Lange, Kenneth.

In: Journal of Computational and Graphical Statistics, 01.01.2019.

Research output: Contribution to journal › Article

@article{29ab436dcaaf4f13a1ba750e6234db27,
title = "MM Algorithms for Variance Components Models",
abstract = "Variance components estimation and mixed model analysis are central themes in statistics with applications in numerous scientific disciplines. Despite the best efforts of generations of statisticians and numerical analysts, maximum likelihood estimation (MLE) and restricted MLE of variance component models remain numerically challenging. Building on the minorization–maximization (MM) principle, this article presents a novel iterative algorithm for variance components estimation. Our MM algorithm is trivial to implement and competitive on large data problems. The algorithm readily extends to more complicated problems such as linear mixed models, multivariate response models possibly with missing data, maximum a posteriori estimation, and penalized estimation. We establish the global convergence of the MM algorithm to a Karush–Kuhn–Tucker point and demonstrate, both numerically and theoretically, that it converges faster than the classical EM algorithm when the number of variance components is greater than two and all covariance matrices are positive definite. Supplementary materials for this article are available online.",
keywords = "Global convergence, Linear mixed model (LMM), Matrix convexity, Maximum a posteriori (MAP) estimation, Minorization–maximization (MM), Multivariate response, Penalized estimation, Variance components model",
author = "Hua Zhou and Liuyi Hu and Jin Zhou and Kenneth Lange",
year = "2019",
month = "1",
day = "1",
doi = "10.1080/10618600.2018.1529601",
language = "English (US)",
journal = "Journal of Computational and Graphical Statistics",
issn = "1061-8600",
publisher = "American Statistical Association",

}
