
Updated 11 April 2023
The University of Surrey’s Cybersecurity Department has developed verification software that can determine how much information an AI system has gathered from an organization’s database. The software can also identify potential flaws in code that could be exploited for malicious purposes. The Surrey researchers aim to incorporate the software into a company’s online security protocol, enabling the business to establish whether an AI system can access its sensitive data. The software recently won the best-paper award at the 25th International Symposium on Formal Methods.
As AI is adopted more widely into daily life, securing AI systems has become critical, particularly because interactions between humans and machines can introduce new vulnerabilities. The University of Surrey’s software helps determine what an AI system knows, making it easier to adopt AI securely. To achieve this, the researchers defined a “program epistemic” logic that lets the software establish exactly what an AI system knows, including reasoning about what it will come to know after future events.
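To give a flavor of the underlying idea, the sketch below is a toy version of standard epistemic-logic semantics, not the Surrey tool itself: an agent “knows” a fact exactly when that fact holds in every possible world the agent cannot distinguish from the actual one, given what it has observed. All names and data here are illustrative assumptions.

```python
# Toy epistemic-logic check (illustrative only, not the Surrey software).
# Possible worlds are dicts of facts; the agent observes only part of each.
worlds = [
    {"record_is_sensitive": True,  "query_answer": 1},
    {"record_is_sensitive": False, "query_answer": 1},
    {"record_is_sensitive": True,  "query_answer": 0},
]

def observation(world):
    # The agent sees only the query answer, never the raw record.
    return world["query_answer"]

def knows(actual, fact, worlds):
    """K(fact) at `actual`: fact must hold in every world that looks
    the same to the agent as the actual world."""
    return all(
        fact(w)
        for w in worlds
        if observation(w) == observation(actual)
    )

actual = worlds[0]
# Worlds 0 and 1 give the same answer but disagree on sensitivity,
# so the agent does NOT know the record is sensitive...
print(knows(actual, lambda w: w["record_is_sensitive"], worlds))  # False
# ...but it does know the answer it observed.
print(knows(actual, lambda w: w["query_answer"] == 1, worlds))    # True
```

A verifier built on this idea can flag a privacy leak whenever `knows` returns `True` for a fact the organization considers sensitive.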
The researchers hope their software will accelerate research into trustworthy and responsible AI systems. It can determine how much an AI system has learned and whether it knows enough to operate, or so much that it compromises privacy. This research is a crucial step toward ensuring the confidentiality and integrity of training datasets.