Secure, Assured, Intelligent Learning Systems (SAILS)
The Secure, Assured, Intelligent Learning Systems (SAILS) program aims to establish and explore a science of security for privacy vulnerabilities in artificial intelligence (AI) systems. Recent research has demonstrated that AI systems are vulnerable to exploits such as reconstructing training data from output predictions alone (model inversion), revealing statistical properties of the training distribution, and determining whether a specific example was part of the training set (membership inference). These vulnerabilities are collectively known as privacy attacks because they can reveal personally identifying information held within a trained model. Given these weaknesses, the goal of SAILS is to develop defensive measures that protect the sensitive training data and statistical information contained within AI models. SAILS will explore a range of vulnerabilities across multiple domains, such as speech, image, and text.
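To make the membership-inference threat concrete, the sketch below is a minimal, hypothetical illustration (not part of the SAILS program itself): a deliberately overfit 1-nearest-neighbour "model" that memorises its training points, attacked by simply thresholding the confidence the model returns for a queried example. All names and the threshold value are illustrative assumptions.

```python
import random

# Toy setup: "members" are the model's training points; "non_members"
# are drawn from the same distribution but never seen during training.
random.seed(0)
members = [(random.random(), random.random()) for _ in range(20)]
non_members = [(random.random(), random.random()) for _ in range(20)]

def model_confidence(x, train=members):
    # The "model" exposes only a confidence score. For 1-NN we use the
    # negated distance to the closest training point, so memorised
    # points receive the maximum possible confidence of 0.0.
    d = min(((x[0] - t[0]) ** 2 + (x[1] - t[1]) ** 2) ** 0.5 for t in train)
    return -d

def infer_membership(x, threshold=-1e-9):
    # Attack: query only the model's output confidence and guess
    # "member" whenever it exceeds the threshold.
    return model_confidence(x) >= threshold

tp = sum(infer_membership(x) for x in members)      # members correctly flagged
fp = sum(infer_membership(x) for x in non_members)  # non-members wrongly flagged
print(tp, fp)
```

Because the toy model memorises its training set perfectly, the attack separates members from non-members using nothing but output confidence; defences of the kind SAILS pursues aim to shrink exactly this gap between a model's behaviour on seen and unseen data.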
- AI security
- Model inversion
- Membership inference
- Machine learning theory