Machine Learning (ML) algorithms are widely used to support decision-making by users and automated systems in many critical applications, including Dynamic Data Driven Applications Systems (DDDAS), automated financial trading, autonomous vehicles, and intrusion detection systems. However, malicious adversaries have strong incentives to manipulate the operation of ML algorithms to achieve objectives such as financial gain or causing injury and disasters. Adversaries against ML can be classified by their capabilities and goals into two types: those with full knowledge of the ML model and its parameters (white-box scenario), and those with no such knowledge who must use guessing techniques to infer the ML model and its parameters (black-box scenario). In both scenarios, the adversary attempts to maliciously manipulate the model either during training or during testing. Defenses against these attacks generally follow one of three approaches: 1) making the ML model robust to adversarial inputs, 2) validating and verifying the input, or 3) changing the ML architecture. In this paper, we present a resilient machine learning (rML) ensemble that defends against adversarial attacks by dynamically changing the ML architecture and the ML model in use, so that adversaries have no knowledge of the current model and consequently cannot manipulate the ML operation at the testing phase. We evaluate the effectiveness of our rML ensemble using the zero-query benchmark dataset "DAmageNet," which contains both clean and adversarial image samples. Our ensemble uses three neural networks: VGG16, ResNet-50, and ResNet-101. The experimental results show that our rML ensemble tolerates adversarial samples and achieves high classification accuracy with only a small degradation in execution time.
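The dynamic model-switching idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `predict_*` functions are placeholders standing in for the three networks named in the abstract (VGG16, ResNet-50, ResNet-101), and the per-query random selection is one plausible way to hide the active model from an attacker.

```python
import random

# Placeholder classifiers; in a real system these would wrap trained
# VGG16, ResNet-50, and ResNet-101 models (names taken from the abstract).
def predict_vgg16(image):
    return "label_from_vgg16"

def predict_resnet50(image):
    return "label_from_resnet50"

def predict_resnet101(image):
    return "label_from_resnet101"

# The rML ensemble: a pool of interchangeable models.
ENSEMBLE = [predict_vgg16, predict_resnet50, predict_resnet101]

def rml_predict(image):
    # Dynamically select a model for each query, so an adversary cannot
    # know which architecture is currently in use (assumed mechanism).
    model = random.choice(ENSEMBLE)
    return model(image)
```

Because the attacker cannot observe which member answers a given query, a perturbation crafted against one fixed architecture is less likely to transfer reliably to the model actually used at test time.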