Machine learning of microstructure–property relationships in materials leveraging microstructure representation from foundational vision transformers

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Machine learning of microstructure–property relationships from data is an emerging approach in computational materials science. Most existing machine learning efforts focus on the development of task-specific models for each microstructure–property relationship. We propose utilizing pre-trained foundational vision transformers for the extraction of task-agnostic microstructure features and subsequent lightweight machine learning of a microstructure-dependent property. We demonstrate our approach with pre-trained state-of-the-art vision transformers (CLIP, DINOv2, SAM) in two case studies on machine learning: (i) the elastic modulus of two-phase microstructures based on simulation data; and (ii) the Vickers hardness of Ni-base and Co-base superalloys based on experimental data published in the literature. Our results show the potential of foundational vision transformers for robust microstructure representation and efficient machine learning of microstructure–property relationships without the need for expensive task-specific training or fine-tuning of bespoke deep learning models.
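The two-stage approach described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: a frozen, pre-trained vision transformer maps each microstructure image to a fixed-length feature vector, and a lightweight regressor then maps features to a property such as elastic modulus. Here `extract_features` is a hypothetical placeholder standing in for a real CLIP/DINOv2/SAM encoder, and the dataset is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images, dim=768):
    """Placeholder for a frozen ViT encoder (e.g., DINOv2).

    A real implementation would run each image through the pre-trained
    transformer and return its embedding; here we return random vectors
    of the same typical dimensionality to keep the sketch self-contained.
    """
    return rng.standard_normal((len(images), dim))

# Synthetic "dataset": 50 microstructures whose property depends
# linearly on the (stand-in) features plus noise.
n, dim = 50, 768
X = extract_features(range(n), dim)          # task-agnostic features
w_true = rng.standard_normal(dim)
y = X @ w_true + 0.1 * rng.standard_normal(n)  # e.g., elastic modulus

# Lightweight model: ridge regression in closed form — no task-specific
# training or fine-tuning of the vision transformer is required.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ y)
y_pred = X @ w
```

In practice the fitted regressor would be evaluated on held-out microstructures; the key design choice is that only this small linear model is trained, while the expensive vision transformer stays frozen.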

Original language: English (US)
Article number: 121217
Journal: Acta Materialia
Volume: 296
DOIs
State: Published - Sep 1 2025

Keywords

  • Machine learning
  • Microstructure representation
  • Microstructure–property relationships
  • Reduced-order models

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Ceramics and Composites
  • Polymers and Plastics
  • Metals and Alloys

