Secure neural network to protect sensitive data
SPNN (Secure Private Neural Network) provides privacy and security for analysts who train deep neural networks to perform inference on big data. These networks learn from training datasets that may contain sensitive data; adversaries can exploit the networks to cause data breaches or to misclassify sensitive information.
“Cyber adversaries can monitor deep neural networks and learn their training and classification processes,” said Curt Wu, Chief Software Engineer at Charles River Analytics and Project Manager on the SPNN effort. “SPNN uses privacy-preserving encryption so deep neural networks can securely perform training and classification tasks.”
SPNN produces a secure neural network that preserves the privacy of training and testing data against white-box attacks via efficient end-to-end encryption. Additional obfuscation defenses thwart black-box attacks by adversaries who subvert or misuse the system to gain unencrypted access to the deep neural network and mount chosen-plaintext attacks.
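This release does not describe SPNN's specific cryptographic design, but the general idea of computing on data that no single party can read can be illustrated with additive secret sharing, one common building block for privacy-preserving neural network inference. The sketch below is a generic illustration (not SPNN's actual method): a sensitive input vector is split into random shares held by two non-colluding parties, each party evaluates a public linear layer on its shares alone, and only the combined result is revealed.

```python
import secrets

MOD = 2**61 - 1  # large prime modulus; each share is uniform in [0, MOD)

def share(x):
    # Split x into two additive shares: x = s0 + s1 (mod MOD).
    # Either share alone is uniformly random and reveals nothing about x.
    s0 = secrets.randbelow(MOD)
    return s0, (x - s0) % MOD

def dot_mod(weights, values):
    # Dot product modulo MOD; each party runs this on its own shares.
    return sum(w * v for w, v in zip(weights, values)) % MOD

def reconstruct(s0, s1):
    # Combining both output shares recovers the true result.
    return (s0 + s1) % MOD

x = [3, 5, 7]   # hypothetical sensitive input features (integer-encoded)
w = [2, 4, 1]   # public model weights for one linear neuron

# Split each input coordinate between two non-colluding parties.
party0, party1 = zip(*(share(v) for v in x))

# Because the layer is linear, the parties' partial results are
# themselves additive shares of the true output w . x.
y = reconstruct(dot_mod(w, party0), dot_mod(w, party1))
print(y)  # 33, i.e., 2*3 + 4*5 + 1*7, computed without exposing x
```

Real systems extend this idea to nonlinear activations and full network training with heavier machinery (homomorphic encryption, garbled circuits, or multiparty computation protocols), which is where the efficiency challenges the release alludes to arise.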
SPNN adds to our growing portfolio of innovative, hardened deep learning applications. For example, our CAMEL approach supports dialogues between humans and artificial intelligence systems to increase trust in deep learning applications.
This material is based upon work supported by the Office of the Secretary of Defense (Acquisition, Technology, and Logistics). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Office of the Secretary of Defense (Acquisition, Technology, and Logistics).