TY - JOUR

T1 - Computing accurate probabilistic estimates of one-D entropy from equiprobable random samples

AU - Gupta, Hoshin V.

AU - Ehsani, Mohammad Reza

AU - Roy, Tirthankar

AU - Sans-Fuentes, Maria A.

AU - Ehret, Uwe

AU - Behrangi, Ali

N1 - Funding Information:
Acknowledgments: We are grateful to the two anonymous reviewers who provided comments and suggestions that helped to improve this paper, and especially to reviewer #2, who insisted that we provide a comparison with the Parzen-window kernel density method. We also acknowledge the many members of the GeoInfoTheory community (https://geoinfotheory.org accessed on 1 March 2021) who have provided both moral support and extensive discussion of matters related to Information Theory and its applications to the Geosciences; without their existence and enthusiastic engagement it is unlikely that the ideas leading to this manuscript would have occurred to us. The first author acknowledges partial support by the Australian Research Council Centre of Excellence for Climate Extremes (CE170100023). The second and sixth authors acknowledge partial support by the University of Arizona Earth Dynamics Observatory, funded by the Office of Research, Innovation and Impact. The QS and BC algorithms used in this work are freely accessible for non-commercial use at https://github.com/rehsani/Entropy accessed on 20 February 2021. The KD algorithm used in this work is downloadable from https://webee.technion.ac.il/~yoav/research/blind-separation.html accessed on 13 May 2021.
Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.

PY - 2021/6

Y1 - 2021/6

N2 - We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data-generating probability density function (pdf) into equal-probability-mass intervals. Whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS only requires specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD, since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel type, and so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential and Bimodal Gaussian Mixture), expected estimation bias is less than 1% and uncertainty is low even for samples of as few as 100 data points; in contrast, for KD the small-sample bias can be as large as -10% and for BC as large as -50%. We speculate that estimating quantile locations, rather than bin probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data-generating pdf.

AB - We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data-generating probability density function (pdf) into equal-probability-mass intervals. Whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS only requires specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD, since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel type, and so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential and Bimodal Gaussian Mixture), expected estimation bias is less than 1% and uncertainty is low even for samples of as few as 100 data points; in contrast, for KD the small-sample bias can be as large as -10% and for BC as large as -50%. We speculate that estimating quantile locations, rather than bin probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data-generating pdf.

KW - Accuracy

KW - Bootstrap

KW - Entropy

KW - Estimation

KW - Quantile spacing

KW - Small-sample efficiency

KW - Uncertainty

UR - http://www.scopus.com/inward/record.url?scp=85108380597&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85108380597&partnerID=8YFLogxK

U2 - 10.3390/e23060740

DO - 10.3390/e23060740

M3 - Article

AN - SCOPUS:85108380597

VL - 23

JO - Entropy

JF - Entropy

SN - 1099-4300

IS - 6

M1 - 740

ER -