TY - GEN
T1 - Towards Memory-Efficient and Sustainable Machine Unlearning on Edge using Zeroth-Order Optimizer
AU - Zhang, Ci
AU - Yang, Chence
AU - Tan, Qitao
AU - Liu, Jun
AU - Li, Ao
AU - Wang, Yanzhi
AU - Lu, Jin
AU - Wang, Jinhui
AU - Yuan, Geng
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/6/29
Y1 - 2025/6/29
N2 - Under increasing regulatory demands for data privacy, machine unlearning (MU) has emerged as a critical and effective technique for removing the influence of specific data points from a trained model. Although retraining from scratch typically yields the ideal unlearning effect, it incurs prohibitive computational cost and resource waste. As a result, gradient ascent (GA)-based MU methods have become popular due to their efficiency. However, GA's reliance on backpropagation leads to substantial memory overhead, rendering such methods impractical on memory-constrained devices such as mobile or edge platforms. In this paper, we propose a zeroth-order (ZO) alternative to conventional backpropagation-based first-order optimization for performing GA-based unlearning. By eliminating the need for gradient computation, our approach significantly reduces memory consumption while maintaining the effectiveness of unlearning. Experiments demonstrate that ZO-GA achieves competitive MU performance and exhibits notably greater training stability than conventional GA methods after unlearning completes. We also discuss several intriguing observations that may provide valuable insights for future research.
AB - Under increasing regulatory demands for data privacy, machine unlearning (MU) has emerged as a critical and effective technique for removing the influence of specific data points from a trained model. Although retraining from scratch typically yields the ideal unlearning effect, it incurs prohibitive computational cost and resource waste. As a result, gradient ascent (GA)-based MU methods have become popular due to their efficiency. However, GA's reliance on backpropagation leads to substantial memory overhead, rendering such methods impractical on memory-constrained devices such as mobile or edge platforms. In this paper, we propose a zeroth-order (ZO) alternative to conventional backpropagation-based first-order optimization for performing GA-based unlearning. By eliminating the need for gradient computation, our approach significantly reduces memory consumption while maintaining the effectiveness of unlearning. Experiments demonstrate that ZO-GA achieves competitive MU performance and exhibits notably greater training stability than conventional GA methods after unlearning completes. We also discuss several intriguing observations that may provide valuable insights for future research.
KW - Deep Learning
KW - Edge Computing
KW - Machine Unlearning
KW - Sustainable AI
KW - Zeroth-Order Optimization
UR - https://www.scopus.com/pages/publications/105017564039
U2 - 10.1145/3716368.3735273
DO - 10.1145/3716368.3735273
M3 - Conference contribution
AN - SCOPUS:105017564039
T3 - Proceedings of the ACM Great Lakes Symposium on VLSI, GLSVLSI
SP - 227
EP - 232
BT - GLSVLSI 2025 - Proceedings of the Great Lakes Symposium on VLSI 2025
PB - Association for Computing Machinery
T2 - 35th Edition of the Great Lakes Symposium on VLSI 2025, GLSVLSI 2025
Y2 - 30 June 2025 through 2 July 2025
ER -