Fast approximate score computation on large-scale distributed data for learning multinomial Bayesian networks

Anas Katib, Praveen Rao, Jacobus J Barnard, Charles Kamhoua

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

In this article, we focus on the problem of learning a Bayesian network over distributed data stored in a commodity cluster. Specifically, we address the challenge of computing the scoring function over distributed data in an efficient and scalable manner, which is a fundamental task during learning. While exact score computation can be done using MapReduce-style computation, our goal is to compute approximate scores much faster, with probabilistic error bounds, and in a scalable manner. We propose a novel approach designed to achieve the following: (a) decentralized score computation using the principle of gossiping; (b) lower resource consumption via a probabilistic approach for maintaining scores using the properties of a Markov chain; and (c) effective distribution of tasks during score computation (on large datasets) by synergistically combining well-known hashing techniques. We conduct a theoretical analysis of our approach in terms of the convergence speed of the statistics required for score computation, and memory and network bandwidth consumption. We also discuss how our approach can efficiently recompute scores when new data are available. We conducted a comprehensive evaluation of our approach and compared it with MapReduce-style computation using datasets of different characteristics on a 16-node cluster. When the MapReduce-style computation provided exact statistics for score computation, it was nearly 10 times slower than our approach. Although it ran faster on randomly sampled datasets than on the entire datasets, it performed worse than our approach in terms of accuracy. Our approach achieved high accuracy (below 6% average relative error) in estimating the statistics for approximate score computation on all the tested datasets. In conclusion, it provides a feasible tradeoff between computation time and accuracy for fast approximate score computation on large-scale distributed data.
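The abstract's point (a) refers to gossip protocols, in which nodes repeatedly exchange and average local values with random peers so that every node converges to a global aggregate without a central coordinator. The following sketch illustrates that general principle only; it is not the paper's algorithm, and the function and variable names are hypothetical.

```python
import random

def gossip_average(values, rounds=200, seed=0):
    """Simulate pairwise gossip averaging: in each round, two randomly
    chosen nodes exchange their local statistics and replace them with
    the pair's average. The per-node values converge toward the global
    mean while the total sum is preserved."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)  # pick two distinct nodes
        avg = (vals[i] + vals[j]) / 2.0
        vals[i] = vals[j] = avg
    return vals

# Each node starts with a local count; after gossiping, every node holds
# an estimate of the global mean, from which a global statistic
# (mean * number_of_nodes) can be recovered locally.
local_counts = [10.0, 0.0, 4.0, 6.0]
estimates = gossip_average(local_counts)
```

Because pairwise averaging conserves the sum, any node can recover the global count from its converged estimate, which is what makes gossip attractive for decentralized score computation.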

Original language: English (US)
Article number: 14
Journal: ACM Transactions on Knowledge Discovery from Data
Volume: 13
Issue number: 2
DOIs: 10.1145/3301304
State: Published - Mar 1 2019
Externally published: Yes

Keywords

  • Approximate score computation
  • Bayesian networks
  • Distributed data
  • Gossip algorithms
  • Structure learning

ASJC Scopus subject areas

  • Computer Science(all)

Cite this

Fast approximate score computation on large-scale distributed data for learning multinomial Bayesian networks. / Katib, Anas; Rao, Praveen; Barnard, Jacobus J; Kamhoua, Charles.

In: ACM Transactions on Knowledge Discovery from Data, Vol. 13, No. 2, 14, 01.03.2019.

Research output: Contribution to journal › Article

@article{94fa55e796b74c51a7f194a2d804e9ca,
title = "Fast approximate score computation on large-scale distributed data for learning multinomial Bayesian networks",
abstract = "In this article, we focus on the problem of learning a Bayesian network over distributed data stored in a commodity cluster. Specifically, we address the challenge of computing the scoring function over distributed data in an efficient and scalable manner, which is a fundamental task during learning. While exact score computation can be done using MapReduce-style computation, our goal is to compute approximate scores much faster, with probabilistic error bounds, and in a scalable manner. We propose a novel approach designed to achieve the following: (a) decentralized score computation using the principle of gossiping; (b) lower resource consumption via a probabilistic approach for maintaining scores using the properties of a Markov chain; and (c) effective distribution of tasks during score computation (on large datasets) by synergistically combining well-known hashing techniques. We conduct a theoretical analysis of our approach in terms of the convergence speed of the statistics required for score computation, and memory and network bandwidth consumption. We also discuss how our approach can efficiently recompute scores when new data are available. We conducted a comprehensive evaluation of our approach and compared it with MapReduce-style computation using datasets of different characteristics on a 16-node cluster. When the MapReduce-style computation provided exact statistics for score computation, it was nearly 10 times slower than our approach. Although it ran faster on randomly sampled datasets than on the entire datasets, it performed worse than our approach in terms of accuracy. Our approach achieved high accuracy (below 6{\%} average relative error) in estimating the statistics for approximate score computation on all the tested datasets. In conclusion, it provides a feasible tradeoff between computation time and accuracy for fast approximate score computation on large-scale distributed data.",
keywords = "Approximate score computation, Bayesian networks, Distributed data, Gossip algorithms, Structure learning",
author = "Anas Katib and Praveen Rao and Barnard, {Jacobus J} and Charles Kamhoua",
year = "2019",
month = "3",
day = "1",
doi = "10.1145/3301304",
language = "English (US)",
volume = "13",
journal = "ACM Transactions on Knowledge Discovery from Data",
issn = "1556-4681",
publisher = "Association for Computing Machinery (ACM)",
number = "2",

}

TY - JOUR

T1 - Fast approximate score computation on large-scale distributed data for learning multinomial Bayesian networks

AU - Katib, Anas

AU - Rao, Praveen

AU - Barnard, Jacobus J

AU - Kamhoua, Charles

PY - 2019/3/1

Y1 - 2019/3/1

N2 - In this article, we focus on the problem of learning a Bayesian network over distributed data stored in a commodity cluster. Specifically, we address the challenge of computing the scoring function over distributed data in an efficient and scalable manner, which is a fundamental task during learning. While exact score computation can be done using MapReduce-style computation, our goal is to compute approximate scores much faster, with probabilistic error bounds, and in a scalable manner. We propose a novel approach designed to achieve the following: (a) decentralized score computation using the principle of gossiping; (b) lower resource consumption via a probabilistic approach for maintaining scores using the properties of a Markov chain; and (c) effective distribution of tasks during score computation (on large datasets) by synergistically combining well-known hashing techniques. We conduct a theoretical analysis of our approach in terms of the convergence speed of the statistics required for score computation, and memory and network bandwidth consumption. We also discuss how our approach can efficiently recompute scores when new data are available. We conducted a comprehensive evaluation of our approach and compared it with MapReduce-style computation using datasets of different characteristics on a 16-node cluster. When the MapReduce-style computation provided exact statistics for score computation, it was nearly 10 times slower than our approach. Although it ran faster on randomly sampled datasets than on the entire datasets, it performed worse than our approach in terms of accuracy. Our approach achieved high accuracy (below 6% average relative error) in estimating the statistics for approximate score computation on all the tested datasets. In conclusion, it provides a feasible tradeoff between computation time and accuracy for fast approximate score computation on large-scale distributed data.

AB - In this article, we focus on the problem of learning a Bayesian network over distributed data stored in a commodity cluster. Specifically, we address the challenge of computing the scoring function over distributed data in an efficient and scalable manner, which is a fundamental task during learning. While exact score computation can be done using MapReduce-style computation, our goal is to compute approximate scores much faster, with probabilistic error bounds, and in a scalable manner. We propose a novel approach designed to achieve the following: (a) decentralized score computation using the principle of gossiping; (b) lower resource consumption via a probabilistic approach for maintaining scores using the properties of a Markov chain; and (c) effective distribution of tasks during score computation (on large datasets) by synergistically combining well-known hashing techniques. We conduct a theoretical analysis of our approach in terms of the convergence speed of the statistics required for score computation, and memory and network bandwidth consumption. We also discuss how our approach can efficiently recompute scores when new data are available. We conducted a comprehensive evaluation of our approach and compared it with MapReduce-style computation using datasets of different characteristics on a 16-node cluster. When the MapReduce-style computation provided exact statistics for score computation, it was nearly 10 times slower than our approach. Although it ran faster on randomly sampled datasets than on the entire datasets, it performed worse than our approach in terms of accuracy. Our approach achieved high accuracy (below 6% average relative error) in estimating the statistics for approximate score computation on all the tested datasets. In conclusion, it provides a feasible tradeoff between computation time and accuracy for fast approximate score computation on large-scale distributed data.

KW - Approximate score computation

KW - Bayesian networks

KW - Distributed data

KW - Gossip algorithms

KW - Structure learning

UR - http://www.scopus.com/inward/record.url?scp=85063238795&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85063238795&partnerID=8YFLogxK

U2 - 10.1145/3301304

DO - 10.1145/3301304

M3 - Article

AN - SCOPUS:85063238795

VL - 13

JO - ACM Transactions on Knowledge Discovery from Data

JF - ACM Transactions on Knowledge Discovery from Data

SN - 1556-4681

IS - 2

M1 - 14

ER -