TY - GEN
T1 - Improved Regret Analysis for Variance-Adaptive Linear Bandits and Horizon-Free Linear Mixture MDPs
AU - Kim, Yeoneung
AU - Yang, Insoon
AU - Jun, Kwang Sung
N1 - Publisher Copyright:
© 2022 Neural information processing systems foundation. All rights reserved.
PY - 2022
Y1 - 2022
AB - In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees, yet it is challenging because variances are often not known a priori. Recently, considerable progress has been made by Zhang et al. (2021), who obtained a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that improve their regret bounds significantly. For linear bandits, we achieve Õ(d^1.5 √(∑_{k=1}^K σ_k^2) + d^2), where d is the dimension of the features, K is the time horizon, σ_k^2 is the noise variance at time step k, and Õ ignores polylogarithmic dependence; this is a factor of d^3 improvement. For linear mixture MDPs, under the assumption that the maximum cumulative reward in an episode is in [0, 1], we achieve a horizon-free regret bound of Õ(d√K + d^2), where d is the number of base models and K is the number of episodes. This is a factor of d^3.5 improvement in the leading term and d^7 in the lower-order term. Our analysis critically relies on a novel peeling-based regret analysis that leverages the elliptical potential 'count' lemma.
UR - http://www.scopus.com/inward/record.url?scp=85162131120&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85162131120&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85162131120
T3 - Advances in Neural Information Processing Systems
BT - Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
A2 - Koyejo, S.
A2 - Mohamed, S.
A2 - Agarwal, A.
A2 - Belgrave, D.
A2 - Cho, K.
A2 - Oh, A.
PB - Neural Information Processing Systems Foundation
T2 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Y2 - 28 November 2022 through 9 December 2022
ER -