
PROPS: Progressively Private Self-alignment of Large Language Models

  • Noel Teku
  • Fengwei Tian
  • Payel Bhattacharjee
  • Souradip Chakraborty
  • Amrit Singh Bedi
  • Ravi Tandon

Research output: Contribution to journal › Article › peer-review

Abstract

Alignment is a key step in developing Large Language Models (LLMs): human feedback is used to ensure adherence to human values and societal norms. This dependence on human feedback raises privacy concerns, since a labeler's preferences may reveal personal values, beliefs, and personality traits. Existing approaches such as Differentially Private SGD (DP-SGD) provide rigorous privacy guarantees by privatizing gradients during fine-tuning and alignment, but they can provide more privacy than necessary (human preferences are tied only to the labels of (prompt, response) pairs) and can degrade model utility. This work focuses on LLM alignment with preference-level privacy, which protects the preference labels provided by humans. We propose PROPS (PROgressively Private Self-alignment), a multi-stage privacy-preserving alignment framework in which models privately aligned in earlier stages serve as labelers that supplement the training data in subsequent stages of alignment. We present theoretical guarantees for PROPS along with comprehensive validation on multiple models (Pythia and GPT) and datasets (AlpacaEval, Anthropic HH-RLHF, truthy-dpo-v0.1), demonstrating the utility of PROPS over existing methods while still providing high privacy. For the same privacy budget, alignment via PROPS achieves up to 3x higher win rates than DP-SGD and 2.5x higher win rates than Randomized Response (RR) based alignment.
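The Randomized Response (RR) baseline mentioned above privatizes each binary preference label independently before it is used for alignment. A minimal sketch of standard randomized response on preference labels is shown below; the function name, the epsilon value, and the batch of labels are illustrative assumptions, not taken from the paper:

```python
import math
import random

def randomized_response(label: int, epsilon: float, rng: random.Random) -> int:
    """Privatize a binary preference label with epsilon-DP randomized response.

    `label` is 0 or 1, indicating which response in a (prompt, response_a,
    response_b) pair the labeler preferred. The true label is kept with
    probability e^eps / (1 + e^eps) and flipped otherwise.
    """
    keep_prob = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return label if rng.random() < keep_prob else 1 - label

# Illustrative: privatize a small batch of preference labels.
rng = random.Random(0)
true_labels = [1, 0, 1, 1, 0]
private_labels = [randomized_response(y, epsilon=2.0, rng=rng) for y in true_labels]
```

Smaller epsilon means a higher flip probability (more privacy, noisier labels); the abstract's comparison suggests PROPS extracts more utility from a fixed budget than applying RR alone.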

Original language: English (US)
Journal: Transactions on Machine Learning Research
Volume: 2025-December
State: Published - 2025
Externally published: Yes

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
