
PAT: Pruning-Aware Tuning for Large Language Models

  • Yijiang Liu
  • Huanrui Yang
  • Youxin Chen
  • Rongyu Zhang
  • Miao Wang
  • Yuan Du
  • Li Du

Research output: Contribution to journal › Conference article › peer-review

Abstract

Large language models (LLMs) excel at language tasks, especially after supervised fine-tuning on top of pre-training. However, their substantial memory and computational requirements hinder practical deployment. Structural pruning, which removes less significant weight dimensions, is one solution. Yet traditional post-hoc pruning often causes significant performance loss, with limited recovery from further fine-tuning due to the reduced model capacity. Since fine-tuning refines the general, unstructured knowledge acquired during pre-training, we aim to integrate structural pruning into the fine-tuning process, and propose the Pruning-Aware Tuning (PAT) paradigm to eliminate model redundancy while preserving performance to the maximum extent. Specifically, we insert novel Hybrid Sparsification Modules (HSMs) between the Attention and FFN components to sparsify the corresponding upstream and downstream linear modules. Each HSM comprises a lightweight operator and a globally shared trainable mask. The lightweight operator keeps the training overhead comparable to that of LoRA, while the trainable mask unifies the channels to be sparsified across modules, ensuring structural pruning. Additionally, we propose the Identity Loss, which decouples the transformation and scaling properties of the HSMs to enhance training robustness. Extensive experiments demonstrate that PAT excels in both performance and efficiency. For example, our Llama2-7b model with a 25% pruning ratio achieves a 1.33× speedup while outperforming the LoRA-finetuned model by up to 1.26% in accuracy at a similar training cost.
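The abstract's core mechanism — a lightweight LoRA-like operator combined with a globally shared trainable mask that zeroes whole channels — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the class name `HSM`, the low-rank factors `A`/`B`, and the mask initialization are all assumptions made for clarity, using NumPy in place of a deep-learning framework.

```python
import numpy as np

class HSM:
    """Hypothetical sketch of a Hybrid Sparsification Module:
    an identity path plus a low-rank (LoRA-like) update, followed by
    a channel mask shared globally across all HSMs in the model."""

    def __init__(self, dim, rank, shared_mask):
        rng = np.random.default_rng(0)
        self.A = rng.standard_normal((dim, rank)) * 0.01  # low-rank factor, small init
        self.B = np.zeros((rank, dim))                    # zero init, as in LoRA
        self.mask = shared_mask                           # trainable in the real method

    def __call__(self, x):
        # Lightweight transform, then channel-wise masking: channels whose
        # mask entry is 0 become exactly zero, so the adjacent upstream and
        # downstream linear layers can be structurally pruned along them.
        return (x + x @ self.A @ self.B) * self.mask

dim, rank = 8, 2
shared_mask = np.ones(dim)
shared_mask[[2, 5]] = 0.0  # channels scheduled for removal (25%)

hsm = HSM(dim, rank, shared_mask)
x = np.ones((1, dim))
y = hsm(x)
assert np.all(y[:, [2, 5]] == 0.0)  # masked channels are exactly zero
```

Because every HSM shares one mask, the zeroed channels line up across layers, which is what makes the resulting sparsity structural (prunable rows/columns) rather than unstructured.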

Original language: English (US)
Pages (from-to): 24686-24695
Number of pages: 10
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 23
State: Published - Apr 11 2025
Event: 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States
Duration: Feb 25 2025 - Mar 4 2025

ASJC Scopus subject areas

  • Artificial Intelligence
