Microsoft has developed a tool for testing the security of AI systems
Microsoft continues to improve its cybersecurity infrastructure and encourages its partners to do the same. According to the software giant, too little attention is paid to the security of platforms that use artificial intelligence, so the company has developed a dedicated tool, named Counterfit, for testing the security of such systems.
Counterfit is an open-source automated tool for security testing of organizations' AI systems. Its purpose is to give a company that uses AI technologies confidence that its AI systems are reliably protected from external attacks. According to Microsoft, 25 of the 28 organizations it surveyed said they did not have the right mechanisms in place to protect their AI infrastructure, and their digital security specialists were not equipped with tools to deal with such threats.
Counterfit began as a set of special scripts that could be used to simulate attacks on individual AI models. Microsoft first used these scripts for internal tests, but over time Counterfit evolved into an automated tool that can run test attacks on multiple AI models at once. The company says Counterfit has become an integral part of its security testing program, both for its existing AI platforms and for products still in development.
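To make the idea of a simulated attack concrete, below is a minimal sketch of the kind of evasion technique such scripts automate: a fast-gradient-sign-style perturbation against a toy logistic-regression classifier. Everything here (the model, its weights, the eps budget) is invented for illustration and says nothing about Counterfit's internals.

import numpy as np

# Toy logistic-regression "target": the weights are visible here only to
# keep the sketch self-contained; a tool like Counterfit treats the
# target as a black box reached over an API.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # hypothetical model weights
b = 0.1                  # hypothetical bias

def predict_proba(x):
    """Probability that input vector x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_style_attack(x, eps=0.2):
    """Nudge every feature in the direction that flips the decision
    (a fast-gradient-sign-style evasion attack)."""
    p = predict_proba(x)
    grad = p * (1.0 - p) * w          # gradient of p with respect to x
    sign = -1.0 if p > 0.5 else 1.0   # push toward the opposite class
    return x + eps * sign * np.sign(grad)

x = rng.normal(size=8)                # a benign input
x_adv = fgsm_style_attack(x)
print(f"score before attack: {predict_proba(x):.3f}")
print(f"score after attack:  {predict_proba(x_adv):.3f}")

The point of the sketch is the pattern, not the specific model: an attacker who can estimate a model's gradient can move an input just far enough to flip its classification while keeping the change small.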
A key advantage of Counterfit is that it can be used in any environment and on any AI model. It can test the security of AI systems whether they run on a local server, at the network edge, or on any cloud platform, and it works with models that process incoming data in almost any format, including text and images.
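One way to picture that model- and environment-agnostic design is a probe that assumes nothing about the target beyond query access. The sketch below is an illustration under stated assumptions, not Counterfit's API: predict can be any callable, whether it wraps a local model object or an HTTP client for a cloud endpoint, and the query budget and perturbation size are arbitrary.

import numpy as np

def query_attack(predict, x, n_queries=200, eps=0.3, seed=1):
    """Random-perturbation probe: the only requirement on the target is
    a callable returning a class label, so the same code works whether
    the model runs on-premises or behind a cloud endpoint."""
    rng = np.random.default_rng(seed)
    original = predict(x)
    for _ in range(n_queries):
        candidate = x + rng.uniform(-eps, eps, size=x.shape)
        if predict(candidate) != original:
            return candidate          # input the model now misclassifies
    return None                       # model held up within the budget

# Hypothetical stand-in target: classifies by the sign of the feature sum.
def toy_model(v):
    return int(v.sum() > 0)

x = np.full(4, 0.05)
adv = query_attack(toy_model, x)
print("evasion found" if adv is not None else "no evasion within budget")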
The company notes that Counterfit will feel familiar to digital security professionals who work with tools such as Metasploit or PowerShell Empire. It can be used both for penetration testing and for vulnerability scanning. While simulating an attack on an AI model, Counterfit writes logs that experts can review later and use to improve the protection of their AI systems.
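As a rough picture of what such logging might look like (the field names and schema below are assumptions, not Counterfit's actual log format), one structured record per simulated attack is enough for analysts to review a run afterward:

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="attack_runs.log", level=logging.INFO)

def log_attack_result(target, attack, succeeded, queries_used):
    """Write one structured record per simulated attack so analysts can
    review the run later. Field names are illustrative, not Counterfit's
    actual log schema."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "target": target,
        "attack": attack,
        "succeeded": succeeded,
        "queries_used": queries_used,
    }
    logging.info(json.dumps(record))

# Hypothetical example entry.
log_attack_result("fraud-model-v2", "random-perturbation", True, 137)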