Neural networks are vulnerable to adversarial attacks crafted from minuscule perturbations of the original data, which nonetheless cause significant performance degradation. Previous work on defending against adversarial evasion attacks typically involves pre-processing the input data at training or test time, or modifying the objective function optimized during training. In contrast, relatively few defense methods focus on modifying the topology and functionality of the defended network itself. Moreover, prior theoretical analyses of the geometry of adversarial examples reveal an intrinsic and challenging trade-off between adversarial and benign accuracy. We introduce a novel modification to a traditional feed-forward convolutional neural network that embeds uncertainty in the network's hidden representations in a learned, data-dependent manner. The proposed alteration renders the network significantly more resilient than alternatives of comparable computational cost. Further, our empirical investigation demonstrates that, unlike prior defenses of comparable strength, the resulting stochastic resonance effect improves adversarial accuracy without significant degradation in benign accuracy.
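The abstract does not specify the mechanism, so the following is only a minimal sketch of one way such learned, data-dependent uncertainty could be embedded in a hidden representation: a small module predicts a per-element noise scale from the activations themselves and perturbs them with Gaussian noise. The module and network names (`DataDependentNoise`, `NoisyConvNet`) and all design choices here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class DataDependentNoise(nn.Module):
    """Hypothetical layer: injects Gaussian noise whose scale is learned
    and predicted from the input activations (data-dependent uncertainty)."""

    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 convolution maps each activation map to a per-element
        # log-variance, so the injected uncertainty is learned end-to-end.
        self.log_sigma = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        sigma = torch.exp(0.5 * self.log_sigma(h))  # data-dependent std-dev
        eps = torch.randn_like(h)                   # unit Gaussian noise
        return h + sigma * eps                      # perturbed hidden state


class NoisyConvNet(nn.Module):
    """A small feed-forward CNN with noise injected after each conv block
    (an assumed placement, for illustration only)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            DataDependentNoise(32),   # uncertainty in first hidden layer
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            DataDependentNoise(64),   # uncertainty in second hidden layer
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(1))


# Example: a forward pass on a CIFAR-10-sized batch.
if __name__ == "__main__":
    model = NoisyConvNet()
    logits = model(torch.randn(8, 3, 32, 32))
    print(logits.shape)  # torch.Size([8, 10])
```

Under these assumptions, the noise scale is trained jointly with the rest of the network, so the model can learn where added stochasticity helps robustness and where it would erode benign accuracy; the abstract's claimed trade-off behavior would depend on the paper's actual design.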