Obfuscating the Dataset: Impacts and Applications

Abstract
Obfuscating a dataset by adding random noise to protect sensitive samples in the training dataset is crucial for preventing data leakage to untrusted parties when dataset sharing is essential. We conduct comprehensive experiments to investigate how dataset obfuscation affects the resulting model weights, in terms of model accuracy, ℓ2-distance-based model distance, and level of data privacy, and we discuss potential applications with the proposed Privacy, Utility, and Distinguishability (PUD) triangle diagram to visualize requirement preferences. Our experiments are based on the popular MNIST and CIFAR-10 datasets under both independent and identically distributed (IID) and non-IID settings. Significant results include a tradeoff between model accuracy and privacy level, and a tradeoff between model difference and privacy level. The results indicate broad application prospects for training outsourcing and for guarding against attacks in federated learning, both of which have become increasingly attractive in many areas, particularly learning in edge computing.
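The abstract describes two measurable quantities: a dataset obfuscated by additive random noise, and an ℓ2 distance between model weight vectors. The following is a minimal sketch of these ideas, assuming zero-mean Gaussian noise and pixel values in [0, 1]; the noise level `sigma` and the helper names are illustrative and not taken from the paper.

```python
import numpy as np

def obfuscate(images, sigma=0.1, seed=0):
    """Obfuscate a dataset by adding zero-mean Gaussian noise,
    then clip back to the valid pixel range [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)

def model_distance(w_a, w_b):
    """l2 distance between two flattened weight (or data) arrays."""
    return float(np.linalg.norm(np.ravel(w_a) - np.ravel(w_b)))

# Toy usage: a batch of 4 MNIST-sized "images" with values in [0, 1]
clean = np.random.default_rng(1).random((4, 28, 28))
noisy = obfuscate(clean, sigma=0.2)
```

Larger `sigma` raises the privacy level but, per the reported tradeoffs, tends to lower the accuracy of a model trained on the noisy data and to increase its weight distance from a model trained on the clean data.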
| Original language | English |
|---|---|
| Article number | 85 |
| Journal | ACM Transactions on Intelligent Systems and Technology |
| Volume | 14 |
| Issue number | 5 |
| DOIs | |
| State | Published - 30 Sep 2023 |
| Externally published | Yes |
Keywords
- Data obfuscation
- data leakage
- edge computing
- federated learning
- machine learning
- privacy