According to an Avertium survey, more than two-thirds of respondents believe artificial intelligence (AI) and machine learning (ML) applications are more capable than humans of resolving cybersecurity threats.
Is that level of confidence earned? Yes and no.
Determining whether that optimism is warranted requires a deeper examination of artificial intelligence in cybersecurity and a clearer understanding of precisely what AI can and cannot do, in context.
Enterprise AI cybersecurity applications compile vast amounts of data from a wide spectrum of sources and perform correlating calculations that analyze behaviors and provide insights into anomalies, potential malicious activities, and emerging cyberattacks. The insights generated can be used to improve decision making, which leads to more reliable threat detection, better prevention strategies, and more resilient network security.
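As a minimal illustration of the kind of behavioral anomaly detection described above, consider flagging hosts whose activity deviates sharply from a historical baseline. This is a toy z-score sketch, not any vendor's actual pipeline; all host names and numbers are illustrative:

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observed):
    """Score each observed value by how many standard deviations
    it sits from the historical baseline (a simple z-score)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return {host: (count - mu) / sigma for host, count in observed.items()}

# Hourly login counts seen historically, and the latest hour per host.
baseline = [12, 9, 11, 10, 13, 8, 12, 11]
observed = {"web-01": 11, "web-02": 97, "db-01": 10}

# Flag hosts more than three standard deviations above the norm.
flagged = [h for h, z in anomaly_scores(baseline, observed).items() if z > 3]
print(flagged)  # -> ['web-02']
```

Real systems correlate far richer signals than login counts, but the principle is the same: establish what normal looks like, then surface the statistical outliers for human review.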
AI deployed as a cybersecurity solution supports an organization's analysts and engineers entrusted to make timely and accurate decisions about incidents. This affords security teams the time and resources to focus on high-level decisions that require intuition and creativity, as well as creates a framework for continuous improvement of security performance.
Artificial intelligence is currently capable of performing only very specific, human-defined tasks. AI cannot “think” creatively or make complex judgment calls from its data inputs; performing actions beyond those specifically designated by the human security team is not (yet) possible.
In his aptly named article “The Problem With AI: Machines Are Learning Things, But Can't Understand Them,” Chris Hoffman of How-To Geek notes:
“The artificial intelligence we do have are trained to do a specific task very well, assuming humans can provide the data to help them learn. They learn to do something but still don't understand it.”
This spotlights the fact that AI cannot do the job for us humans: we cannot “set it and forget it” when it comes to making or executing strategic cybersecurity decisions that require complex understanding. AI's role is that of a tool, not a partner.
So, where exactly does AI fit within a cybersecurity role? There's no disputing that AI performs valuable functions in a fraction of the time humans would need, and handles some tasks humans could never accomplish at all. For example, at Avertium, we embrace AI technology and contributions for Tier 1 capabilities at scale.
As an MSSP, we use data from our collective customers in aggregate to amplify our AI technology data processing operations.
The result is insights and support that can be applied for the good of all our customers and partners.
Current AI applications are superb in the following cybersecurity roles:
Adjusting security systems to meet the increased scale of threats. The cybersecurity environment for enterprises has evolved far beyond a human-scale challenge. Even relatively small organizations must defend a vast attack surface that includes thousands of individual devices and applications, monitor hundreds of potential attack vectors, and analyze masses of incoming data. AI allows enterprises to meet those challenges of scale efficiently and affordably.
Assisting under-resourced security operations to manage threats. AI can be a force multiplier that helps security teams punch above their weight. By absorbing the most labor- and resource-intensive security functions that involve rapid, high-volume data analysis, AI enables security teams to maintain focus on their most mission-critical security functions.
Helping security engineers make faster, more informed decisions. AI can process immense volumes of data with unmatched speed. In an increasingly crowded threat environment, AI helps cut through the clutter of alerts and notifications by bringing forward only the most critical data and insights, so security teams can respond quickly to investigate indicators of compromise.
The current relevance of AI in cybersecurity applications is clear: to provide value, AI must be combined with the reasoning, creative, and anticipatory capabilities of the human brain, along with the experience and insights of a seasoned security team. An example of how Avertium uses AI within its cybersecurity processes involves pattern recognition in very large datasets.
Within our high-velocity CyberOps Centers of Excellence, we process billions of events per day, a volume unmanageable without algorithmic efficiencies we have tailored to our needs. Our orchestration platform, which consumes billions of events from customer environments, is front-ended by an ML engine that filters much of the noise from the feed before our human analysts apply their investigative expertise. This layer of pattern-recognition-based filtration reduces the analyst workload from billions of events to tens of thousands of alarms. Expert analysts then handle the “last mile”: contextualizing each alarm into relevant, actionable information for our customers.
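A drastically simplified sketch of that kind of ML front-end filter follows. The scoring function here is a hand-written stand-in for a trained model, and every field name is hypothetical; the point is the shape of the funnel, where only high-scoring events become analyst-facing alarms:

```python
def ml_filter(events, score_fn, threshold=0.9):
    """Front-end filter: keep only the events the model scores
    as suspicious enough to surface as analyst-facing alarms."""
    return [e for e in events if score_fn(e) >= threshold]

def toy_score(event):
    """Stand-in for a trained model: rate failed logins from
    unfamiliar sources as the most suspicious combination."""
    score = 0.0
    if event.get("outcome") == "failure":
        score += 0.5
    if event.get("source") == "unknown":
        score += 0.5
    return score

events = [
    {"outcome": "success", "source": "corp-vpn"},
    {"outcome": "failure", "source": "unknown"},
    {"outcome": "failure", "source": "corp-vpn"},
]

alarms = ml_filter(events, toy_score)
print(len(alarms))  # -> 1: only the high-scoring event survives
```

In production, the scoring function is a model trained on historical telemetry rather than a handful of if-statements, but the division of labor is the same: the machine compresses the event stream, and humans contextualize what remains.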
AI will eventually present its own challenges that the cybersecurity industry must diligently identify and resolve. Hackers are already using newly discovered vulnerabilities to attack AI systems, and are even deploying AI applications themselves as emergent threats. Weaponized AI technology is being used to amplify and expand the scope of attacks.
Forrester's Predictions 2020 points out that bad actors can adopt new technologies such as AI/ML faster than security leaders can, because they operate at superior scale: a greater volume of attackers, armed with more sophisticated tools and aimed at a larger attack surface.
Supporting this cautionary note, Isaac Ben-Israel, the Director of the Blavatnik Interdisciplinary Cyber Research Center (ICRC) at Tel Aviv University, warns that organizations must not assume their systems are safe simply because they implemented AI, but must instead become more proactive in protecting themselves and remain ever watchful for threats:
“Hackers are 'making friends' with our systems, it's time to break them up […]. Battling this problem and ensuring that advanced AI techniques and algorithms remain on the side of the good guys is going to be one of the biggest challenges for cybersecurity experts in the coming years.”
While AI and ML are each superb tools for specific use cases (pattern recognition in very large datasets, for example), these technologies are not a viable replacement for the capabilities a skilled analyst or experienced security team brings to the table. A human has the ability to think laterally to solve problems; AI can't.
Machine learning can be trained to perform a specific set of computational tasks extremely well; the nuance and elasticity of the human mind, however, allow a problem and its solution to be contextualized within the unique use case in which they arise. An equivalent AI capability remains solely at the far frontiers of science fiction and Hollywood storytelling.
AI is not the ultimate cybersecurity solution, despite what buzzy articles and over-promising security companies present. However, AI-powered cybersecurity does enable current best-in-class performance and superior results.
Are you ready to explore a better way to handle the massive numbers of alerts and alarms coming into your cyber operations? Reach out to start the conversation.