Abstract
Machine learning of microstructure–property relationships from data is an emerging approach in computational materials science. Most existing machine learning efforts focus on developing task-specific models for each microstructure–property relationship. We propose using pre-trained foundational vision transformers to extract task-agnostic microstructure features, followed by lightweight machine learning of a microstructure-dependent property. We demonstrate our approach with pre-trained state-of-the-art vision transformers (CLIP, DINOv2, SAM) in two case studies: machine learning of (i) the elastic modulus of two-phase microstructures from simulation data; and (ii) the Vickers hardness of Ni-base and Co-base superalloys from experimental data published in the literature. Our results show the potential of foundational vision transformers for robust microstructure representation and efficient machine learning of microstructure–property relationships without expensive task-specific training or fine-tuning of bespoke deep learning models.
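The two-step recipe in the abstract — a frozen feature extractor followed by a lightweight property model — can be sketched as below. This is a minimal illustration, not the paper's implementation: a fixed random projection stands in for the pooled embedding of a pre-trained vision transformer (CLIP, DINOv2, or SAM) so the example runs without model downloads, the microstructures and the property are synthetic, and the lightweight model is a closed-form ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG_SIDE, EMBED_DIM = 32, 64

# Stand-in for a frozen foundation-model encoder: in practice, replace this
# random projection with the pre-trained transformer's pooled image embedding.
_projection = rng.normal(size=(IMG_SIDE * IMG_SIDE, EMBED_DIM))

def embed(images):
    """Map (n, 32, 32) microstructure images to (n, EMBED_DIM) features."""
    return images.reshape(len(images), -1) @ _projection

def fit_ridge(features, targets, alpha=1.0):
    """Lightweight property model: closed-form ridge regression with a bias."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias column
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ targets)

def predict(features, w):
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ w

# Synthetic two-phase microstructures; the "property" depends on phase
# fraction, a crude stand-in for a microstructure-dependent modulus.
images = (rng.random((200, IMG_SIDE, IMG_SIDE)) > 0.5).astype(float)
prop = 100.0 + 50.0 * images.mean(axis=(1, 2)) + rng.normal(scale=0.1, size=200)

feats = embed(images)       # task-agnostic features from the frozen encoder
w = fit_ridge(feats, prop)  # cheap task-specific head, no fine-tuning
pred = predict(feats, w)
```

The key design point is that only the small ridge model is fit per property; the (here mocked) transformer features are computed once and reused across tasks.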
| Original language | English (US) |
|---|---|
| Article number | 121217 |
| Journal | Acta Materialia |
| Volume | 296 |
| DOIs | |
| State | Published - Sep 1 2025 |
Keywords
- Machine learning
- Microstructure representation
- Microstructure–property relationships
- Reduced-order models
ASJC Scopus subject areas
- Electronic, Optical and Magnetic Materials
- Ceramics and Composites
- Polymers and Plastics
- Metals and Alloys