Abstract
Data privacy is a fundamental challenge for Deep Learning (DL) in many applications. In this work, we propose SecureTrain, which aims to carry out privacy-preserved DL model training efficiently and without accuracy loss. SecureTrain enables joint linear and non-linear computation based on the Homomorphic Secret Share (HSS) technique to carry out approximation-free non-polynomial operations, achieving training stability and preventing accuracy loss. Meanwhile, it eliminates the time-consuming homomorphic permutation operation (Perm) and features an efficient piggyback design, by carefully devising the share set and exploiting the dataflow of the whole training process. This design significantly reduces the overall system training time. We analyze the computation and communication complexity of SecureTrain and prove its security. We implement SecureTrain and benchmark its performance on well-known datasets for both inference and training. For inference, SecureTrain not only ensures privacy-preserved inference, but also achieves an inference speedup as high as 48× compared with state-of-the-art inference frameworks. For training, SecureTrain maintains model accuracy and stability comparable to plaintext training, in sharp contrast to other schemes. To the best of our knowledge, this is the first work that addresses two fundamental challenges, accuracy loss/training instability and computation efficiency, in privacy-preserved deep neural network training.
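To give a flavor of the secret-share-based computation the abstract refers to, the following is a minimal sketch of plain two-party additive secret sharing over a ring. It is an illustrative toy, not SecureTrain's actual HSS protocol; the modulus and function names are assumptions for the example. It shows the key property the framework builds on: linear operations can be evaluated locally on shares, and the result reconstructs correctly.

```python
import secrets

MOD = 2**32  # illustrative ring size, not taken from the paper


def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares with x = (s0 + s1) mod MOD."""
    s0 = secrets.randbelow(MOD)
    s1 = (x - s0) % MOD
    return s0, s1


def reconstruct(s0: int, s1: int) -> int:
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % MOD


def add_shares(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """Each party adds its own shares locally; no communication needed."""
    return ((a[0] + b[0]) % MOD, (a[1] + b[1]) % MOD)


x, y = 7, 35
xs, ys = share(x), share(y)
zs = add_shares(xs, ys)
assert reconstruct(*zs) == (x + y) % MOD
```

Each share alone is uniformly random and reveals nothing about the secret; non-linear operations (the non-polynomial activations the abstract highlights) are exactly where secure-computation schemes typically resort to polynomial approximation, which SecureTrain avoids.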
| Original language | English (US) |
|---|---|
| Pages (from-to) | 187-202 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Network Science and Engineering |
| Volume | 9 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2022 |
| Externally published | Yes |
Keywords
- Homomorphic encryption
- Neural network training
- Privacy preserving
- Secret share
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Science Applications
- Computer Networks and Communications
Title: 'SecureTrain: An Approximation-Free and Computationally Efficient Framework for Privacy-Preserved Neural Network Training'