TY - GEN
T1 - Decentralized Expectation Maximization Algorithm
AU - Jin, Honghe
AU - Sun, Xiaoxiao
AU - Xu, Liwen
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - As a promising paradigm that does not require a central node, decentralized computing provides better network load balancing and has advantages over centralized computing in terms of data protection. Although decentralized algorithms such as decentralized gradient descent have been studied extensively, no such research exists for the expectation-maximization (EM) algorithm, which comprises an expectation step (E-step) and a maximization step (M-step) and is a popular iterative method for missing-data studies and latent-variable models. In this paper, we propose decentralized EM algorithms under different communication and network-topology settings, such as synchronous communication and dynamic networks. Convergence analysis of the proposed algorithms is provided in the synchronous scenario. Empirical studies show that our proposed algorithms have numerical advantages over EM algorithms based on local data or full data only, especially when the M-step has no closed-form maximization.
AB - As a promising paradigm that does not require a central node, decentralized computing provides better network load balancing and has advantages over centralized computing in terms of data protection. Although decentralized algorithms such as decentralized gradient descent have been studied extensively, no such research exists for the expectation-maximization (EM) algorithm, which comprises an expectation step (E-step) and a maximization step (M-step) and is a popular iterative method for missing-data studies and latent-variable models. In this paper, we propose decentralized EM algorithms under different communication and network-topology settings, such as synchronous communication and dynamic networks. Convergence analysis of the proposed algorithms is provided in the synchronous scenario. Empirical studies show that our proposed algorithms have numerical advantages over EM algorithms based on local data or full data only, especially when the M-step has no closed-form maximization.
KW - Asynchronous optimization
KW - Decentralized computing
KW - Dynamic network
KW - EM algorithm
KW - Mixture model
UR - http://www.scopus.com/inward/record.url?scp=85092654749&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092654749&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-60245-1_35
DO - 10.1007/978-3-030-60245-1_35
M3 - Conference contribution
AN - SCOPUS:85092654749
SN - 9783030602444
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 512
EP - 527
BT - Algorithms and Architectures for Parallel Processing - 20th International Conference, ICA3PP 2020, Proceedings
A2 - Qiu, Meikang
PB - Springer Science and Business Media Deutschland GmbH
T2 - 20th International Conference on Algorithms and Architectures for Parallel Processing, ICA3PP 2020
Y2 - 2 October 2020 through 4 October 2020
ER -