In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees, yet it is challenging because variances are often not known a priori. Recently, Zhang et al. (2021) made considerable progress by obtaining a variance-adaptive regret bound for linear bandits without knowledge of the variances, as well as a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that improve their regret bounds significantly. For linear bandits, we achieve $\tilde{O}\big(d^{1.5}\sqrt{\sum_{k=1}^{K} \sigma_k^2} + d^2\big)$, where $d$ is the dimension of the features, $K$ is the time horizon, $\sigma_k^2$ is the noise variance at time step $k$, and $\tilde{O}$ ignores polylogarithmic dependence, which is a factor of $d^3$ improvement. For linear mixture MDPs with the assumption that the maximum cumulative reward in an episode is in $[0, 1]$, we achieve a horizon-free regret bound of $\tilde{O}(d\sqrt{K} + d^2)$, where $d$ is the number of base models and $K$ is the number of episodes. This is a factor of $d^{3.5}$ improvement in the leading term and $d^7$ in the lower-order term. Our analysis critically relies on a novel peeling-based regret analysis that leverages the elliptical potential 'count' lemma.
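As background on the last point, one standard form of the elliptical potential count lemma can be stated and proved via a determinant-doubling argument. The sketch below is a minimal self-contained version, not necessarily the exact statement or constants used in the paper; the ridge-regularization parameter $\lambda > 0$ and the norm bound $\|x_k\|_2 \le 1$ are assumptions of this sketch.

```latex
% A minimal, self-contained form of the elliptical potential count lemma.
% Assumptions of this sketch: \|x_k\|_2 <= 1 and ridge parameter \lambda > 0;
% the exact statement and constants used in the paper may differ.
\begin{lemma}[Elliptical potential count, sketch]
Let $x_1,\dots,x_K \in \mathbb{R}^d$ satisfy $\|x_k\|_2 \le 1$, and let
$V_k = \lambda I + \sum_{s=1}^{k} x_s x_s^\top$ with $\lambda > 0$. Then
\[
  \bigl|\{\, k \in [K] : \|x_k\|_{V_{k-1}^{-1}}^2 \ge 1 \,\}\bigr|
  \;\le\; d \log_2\!\Bigl(1 + \tfrac{K}{d\lambda}\Bigr).
\]
\end{lemma}
\begin{proof}[Proof sketch]
By the matrix determinant lemma,
$\det V_k = \det V_{k-1}\bigl(1 + \|x_k\|_{V_{k-1}^{-1}}^2\bigr)$,
so every round with $\|x_k\|_{V_{k-1}^{-1}}^2 \ge 1$ at least doubles the
determinant, and no round decreases it. Since $\det V_0 = \lambda^d$ and,
by the AM--GM inequality, $\det V_K \le (\lambda + K/d)^d$, the number of
doublings is at most
$\log_2\bigl(\det V_K / \det V_0\bigr) \le d \log_2\!\bigl(1 + K/(d\lambda)\bigr)$.
\end{proof}
```

In contrast to the classical elliptical potential lemma, which bounds the *sum* $\sum_{k=1}^{K} \min\{1, \|x_k\|_{V_{k-1}^{-1}}^2\}$, the count version bounds the *number* of rounds with large elliptical norm, which is what the peeling-based analysis exploits.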