Differential Privacy (DP) has become the gold standard for preserving privacy in deep learning. Intuitively, DP ensures that the output of a model is approximately invariant to the inclusion or exclusion of any single individual's data in the training set. There is, however, a trade-off between privacy and utility: DP models tend to perform worse than non-DP models trained on the same data, because the per-sample gradient clipping and noise addition required for DP guarantees obfuscate each individual data point's contribution. In this work, we propose a method to reduce this discrepancy by improving the alignment between each training sample's DP gradient and its non-DP gradient, i.e., by increasing their cosine similarity. Optimizing this alignment in only a relevant subset of gradient dimensions further improves performance. We evaluate our method on CIFAR-10 and a pediatric pneumonia chest X-ray dataset.
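To make the quantities above concrete, the following is a minimal NumPy sketch, assuming a standard DP-SGD-style setup, of per-sample gradient clipping and Gaussian noising together with the cosine similarity between the privatized gradient and the non-DP gradient that the method seeks to increase. It is an illustration, not the paper's implementation: clip_norm, sigma, and the top-k dimension selection are assumptions introduced here, not values or criteria taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_and_noise(per_sample_grads, clip_norm=1.0, sigma=1.0):
    """DP-SGD-style step: clip each per-sample gradient to clip_norm,
    sum, add Gaussian noise scaled to the clip norm, then average."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_sample_grads)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy example: 8 per-sample gradients in a 10-dimensional parameter space.
grads = [rng.normal(size=10) for _ in range(8)]
non_dp_grad = np.mean(grads, axis=0)   # ordinary (non-private) batch gradient
dp_grad = clip_and_noise(grads)        # clipped-and-noised (DP) batch gradient

print("full-dim cosine:", cosine_similarity(dp_grad, non_dp_grad))

# Measuring alignment on only a subset of gradient dimensions; picking the
# k largest-magnitude entries of the non-DP gradient is one plausible
# "relevant subset" heuristic, assumed here purely for illustration.
k = 4
idx = np.argsort(np.abs(non_dp_grad))[-k:]
print("top-k cosine:", cosine_similarity(dp_grad[idx], non_dp_grad[idx]))
```

In this sketch, larger sigma or a tighter clip_norm drives the cosine similarity between dp_grad and non_dp_grad down, which is exactly the obfuscation of individual contributions described above.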