In a move sure to please security intelligence advocates, the Senate voted in January to reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA) for another six years. The act's passage will be seen as a victory for Republican national security hawks as well as centrist Democrats who see FISA as an important tool in maintaining America's IT security in the face of constantly evolving cyberattack methods. Senate Intelligence Chairman Richard Burr was quoted as saying: "If you look at the threat matrix today, it's worse than it was six years ago. It's more global, it's more specific, it's the reason that we need this program. I think more and more members realize that."
PwC's 2018 AI Predictions report suggests that developing artificial intelligence (AI) has the potential to be a powerful tool in delivering ever-more effective security intelligence analytics, as well as shoring up IT security defenses. Unfortunately, there is a flip side: the same maturing AI can also be used to drive more effective cyberattacks.
Machine learning is at the heart of AI advancement, and in 2018 we can expect AI-driven infrastructures to get smarter and more practical. Consumer products such as smart home management systems may be the visible leading edge of AI in the modern world, but its true potential lies elsewhere: in machine-learning-driven analytics, with the ability to dive deep into big data sets and recognize patterns and trends that human analysts might miss. The application of that ability to security intelligence and the prevention of cyberattacks is obvious.
According to PwC, AI will fill an emerging gap in network cyberdefense, as human IT security teams increasingly struggle to keep pace with both the variety and volume of attacks. Importantly, AI can improve the accuracy and speed of security intelligence analytics, quickly recognizing and categorizing threats before designing and deploying a response.
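The kind of pattern recognition described above can be illustrated with a toy example. The sketch below is not the analytics PwC describes; it is a minimal, hypothetical anomaly detector that flags events deviating sharply from a learned baseline (here, a simple z-score test over invented hourly counts of failed logins) – the same basic idea, at miniature scale, behind spotting a threat faster than a human reviewing logs could:

```python
import statistics

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observed values whose z-score against the baseline exceeds threshold."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [value for value in observed if abs(value - mean) / stdev > threshold]

# Hypothetical hourly counts of failed logins under normal conditions...
baseline = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13]
# ...and today's observations, including a burst that may signal an attack.
observed = [14, 13, 210, 15]

print(flag_anomalies(baseline, observed))  # the burst of 210 is flagged
```

Real security analytics platforms replace this fixed statistical threshold with models that are continuously retrained on new traffic, which is where machine learning earns its keep.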
As we noted earlier, however, this is a double-edged sword when it comes to IT security. Malicious actors – including not just cybercriminals but also cyberterrorists and foreign nation-states – will have access to the same maturing AI resources, providing tools that can learn from both successful and unsuccessful attacks in order to develop and deploy new attack strategies.
The lesson from all this? For governments and companies alike, an AI arms race is already in progress, and it is only going to intensify in the coming months and years. The WannaCry ransomware attack in May 2017 affected hundreds of thousands of computers in more than 150 countries, targeting both government bodies – including Britain's National Health Service – and private companies. The reality is that further AI-driven cyberattacks are inevitable, and we have reached the stage where companies should consider employing AI-driven security intelligence analytics to minimize the chance of a serious IT security breach.