Over the past few years, we’ve witnessed an explosion of startups and established companies touting their use of Artificial Intelligence (AI) to optimize nearly everything – from predicting IVF success rates to writing emails. In fact, according to Statista, there are over 2,000 AI companies in the United States alone.
It comes as no surprise, then, that AI is radically transforming cybersecurity as well. In this article, we’ll explore how this trend is changing cybersecurity staffing, impacting several security markets, and contributing to new threats.
AI Is Changing Cybersecurity Staffing
Most of us are aware of the startling predictions about how advances in Artificial Intelligence will lead to sweeping job losses.
A now-famous study by the McKinsey Global Institute reported in 2017 that between 39 and 73 million U.S. jobs stand to be automated by 2030. However, the study clarifies that only about 5% of current occupations could be automated entirely, while in roughly 60% of jobs, about one-third of activities could be automated.
Cybersecurity will likely follow the second path, as research conducted by the Ponemon Institute suggests.
In each of the task areas that research identifies, AI can save security teams significant amounts of time, especially when it comes to investigating vulnerabilities, patching networks, and eliminating false positives.
Cybersecurity organizations are likely to continue pursuing novel ways to use AI, especially considering that the industry faces an estimated shortfall of 3.5 million workers by 2021.
The Impact of AI on Cybersecurity Markets
Generally speaking, Artificial Intelligence is being used across cybersecurity markets to automate rote, low-value tasks and to enable better predictions so that teams can address issues proactively.
Threat Hunting
Threat hunting is ripe for disruption by AI. Traditional techniques, such as using signatures or indicators of compromise to identify threats, work well for threats that have been previously encountered.
However, these methods are not effective at identifying threats that have not been seen before; they yield roughly a 90% detection rate, according to Steve Grobman, SVP at McAfee. Relying solely on Artificial Intelligence can push detection rates up to 95%, but the price of that improvement is an explosion in false positives. The best solution combines AI with traditional methods, sometimes achieving close to a 100% detection rate while also minimizing false positives.
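To make that combination concrete, below is a minimal sketch (in Python) of a layered detector: a signature check catches known threats with near-zero false positives, and a model score is consulted only for samples the signatures miss. The hash set, feature names, threshold, and scoring stub are illustrative assumptions, not any vendor’s actual implementation.

```python
# Hedged sketch of layered detection: signature match first, model score second.
import hashlib

# Hypothetical SHA-256 indicators of previously seen malware.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def signature_match(sample_bytes: bytes) -> bool:
    """Traditional detection: exact match against known indicators of compromise."""
    return hashlib.sha256(sample_bytes).hexdigest() in KNOWN_BAD_HASHES

def ml_score(features: dict) -> float:
    """Stand-in for a trained classifier's P(malicious); a real system would
    call something like model.predict_proba(...) here."""
    return 0.5  # placeholder value for illustration only

def classify(sample_bytes: bytes, features: dict, threshold: float = 0.9) -> str:
    if signature_match(sample_bytes):        # known threat: high confidence, few false positives
        return "malicious (known signature)"
    if ml_score(features) >= threshold:      # unseen threat: model-based, noisier
        return "suspicious (model score above threshold)"
    return "benign"

print(classify(b"example sample", {"entropy": 7.9, "imported_apis": 12}))
```

Raising or lowering the threshold is exactly the detection-rate-versus-false-positive trade-off described above.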
Further, AI can augment the hunting process through behavioral analysis. For instance, startups such as ReaQta are leveraging AI models to develop a profile of every application within an organization’s network by ingesting the high volumes of data produced by its endpoints.
Once a baseline of behavior has been constructed, the algorithms can flag statistically significant deviations as anomalies and alert security analysts that further investigation is required.
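As a rough illustration of that baseline-and-deviation idea, the sketch below learns simple per-application statistics and flags observations that fall far outside them. The single feature (outbound kilobytes per minute) and the three-sigma threshold are assumptions chosen for clarity, not ReaQta’s actual model.

```python
# Minimal anomaly-detection sketch: learn a per-application baseline, then
# flag statistically significant deviations for analyst review.
from statistics import mean, stdev

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Summarize past behavior (e.g. outbound KB/min) for one application."""
    return mean(history), stdev(history)

def is_anomalous(observation: float, baseline: tuple[float, float], sigma: float = 3.0) -> bool:
    """Flag observations more than `sigma` standard deviations from the baseline mean."""
    mu, sd = baseline
    return sd > 0 and abs(observation - mu) / sd > sigma

# Example: an application that normally sends ~100 KB/min suddenly sends 10 MB/min.
baseline = build_baseline([95, 102, 98, 110, 99, 101, 97])
print(is_anomalous(10_000, baseline))  # True -> alert a security analyst to investigate
```

A production system would track many features per application and use a more robust model, but the workflow is the same: build the baseline, score new observations, and surface only the significant deviations.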
Vulnerability Management
Managing and prioritizing vulnerabilities is extremely difficult for any organization given the sheer volume of them. In 2019 alone, 2,197 unique vulnerabilities have already been reported, according to research conducted by VulnDB.
Considering that it takes the average organization more than a month to patch a critical vulnerability, enterprises need tools now more than ever to help security teams prioritize and manage vulnerabilities before they are exploited.
While traditional techniques such as vulnerability scanning are effective in identifying risk, innovative startups are using AI to go one step further.
Instead of simply waiting for hackers to exploit high-risk vulnerabilities and then scrambling to neutralize the attacks before they wreak too much havoc, companies are trying to address vulnerabilities proactively.
One such startup, CYR3CON, is utilizing AI to predict which attacks are likely to happen. CYR3CON’s team uses machine learning techniques to correlate multiple indicators, including hacker discussions on the dark web, hacker reputation, the affected vendor, and more, to determine with high accuracy which vulnerabilities are likely to be used in an attack.
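The sketch below illustrates the general approach of training a classifier on per-vulnerability indicators to estimate the likelihood of exploitation. The features, training rows, and model choice are made up for illustration; this is not CYR3CON’s actual system or data.

```python
# Illustrative exploit-prediction sketch: score CVEs by likelihood of being
# used in an attack, based on simple indicator features.
from sklearn.ensemble import RandomForestClassifier

# Each row: [dark_web_mentions, avg_poster_reputation, cvss_score, vendor_popularity]
X_train = [
    [12, 0.9, 9.8, 0.8],   # widely discussed by reputable hackers -> later exploited
    [0,  0.0, 6.5, 0.2],   # barely discussed -> not exploited
    [5,  0.6, 7.5, 0.9],
    [1,  0.1, 4.3, 0.1],
]
y_train = [1, 0, 1, 0]     # 1 = vulnerability was later seen in real attacks

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

new_cve = [[8, 0.7, 8.1, 0.9]]
print(model.predict_proba(new_cve)[0][1])  # estimated probability of exploitation
```

Ranking the backlog by a score like this lets a team patch the handful of vulnerabilities most likely to be weaponized first, rather than working strictly by severity rating.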
Network Security
The impact that AI can have on network security is less straightforward, but the benefits are still substantial. Two aspects of network security that are traditionally time-intensive are creating security policy and understanding a given organization’s network topology.
Network security typically works by applying a set of policies that determine which connections are legitimate and should be permitted, and identifying connections that should be further inspected for malicious behavior.
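As a toy illustration of what such a policy set might look like, the sketch below encodes a few rules that either allow a connection between functional groups or send it for deeper inspection, with everything else denied by default. The group names, ports, and rules are purely illustrative.

```python
# Toy policy model: each rule names a source group, destination group, port,
# and an action; anything that matches no rule is denied (zero trust).
POLICIES = [
    {"src": "finance-web-prod", "dst": "finance-app-prod", "port": 8080,  "action": "allow"},
    {"src": "finance-app-prod", "dst": "finance-db-prod",  "port": 5432,  "action": "allow"},
    {"src": "any",              "dst": "finance-db-prod",  "port": "any", "action": "inspect"},
]

def decide(src: str, dst: str, port: int) -> str:
    """Return the action of the first matching rule, or deny by default."""
    for rule in POLICIES:
        if rule["src"] in (src, "any") and rule["dst"] in (dst, "any") \
                and rule["port"] in (port, "any"):
            return rule["action"]
    return "deny"

print(decide("finance-web-prod", "finance-app-prod", 8080))   # allow
print(decide("marketing-web-prod", "finance-db-prod", 5432))  # inspect
print(decide("finance-web-prod", "finance-app-prod", 22))     # deny (no matching rule)
```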
While these sets of policies are often effective in enforcing a zero-trust model, the challenge lies in creating and maintaining them given the scale of modern networks. A single application is typically composed of web, app, and database tiers and exists in multiple environments, including development, test, and production, meaning that each application spans nine functional groups, each of which may require its own security policy.
A “medium-size” data center or network may have up to 500 applications and could therefore require 4,500 different security rules, which is why implementing network security products often takes weeks.
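The arithmetic behind those figures is simple but unforgiving, as the short calculation below shows (the tier and environment names come from the example above; one rule per group is a rough floor, since many groups need several).

```python
# Back-of-the-envelope policy scale: 3 tiers x 3 environments = 9 functional
# groups per application; ~500 applications -> ~4,500 group-level policies.
tiers = ["web", "app", "database"]
environments = ["development", "test", "production"]
applications = 500

groups_per_app = len(tiers) * len(environments)   # 9
total_policies = applications * groups_per_app    # 4,500
print(groups_per_app, total_policies)
```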
Further complicating this, most organizations lack rigorous naming conventions for workloads and applications, which means that teams must spend even more time determining which workloads make up a given application and which group they belong to (e.g. the web tier of the finance application in production).
Innovative startups in this space, including ShieldX Networks, are utilizing AI to expedite this process by learning network traffic patterns and recommending both security policy and functional grouping of workloads. Similarly, Edgewise Networks uses machine learning to model application communication patterns and recommend security policies.
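One way to approximate what these products do is to cluster workloads by their observed traffic so that similarly behaving workloads can be proposed as a functional group. The sketch below uses k-means over made-up port-usage features; the features and the choice of algorithm are assumptions for illustration, not ShieldX’s or Edgewise’s actual approach.

```python
# Group workloads by traffic behavior: workloads that land in the same cluster
# become a candidate functional group for an analyst to confirm.
from sklearn.cluster import KMeans

# Each row: share of a workload's flows seen on web (443), app (8080), and db (5432) ports.
traffic_profiles = [
    [0.9, 0.1, 0.0],   # behaves like a web-tier workload
    [0.8, 0.2, 0.0],
    [0.1, 0.8, 0.1],   # behaves like an app-tier workload
    [0.0, 0.9, 0.1],
    [0.0, 0.1, 0.9],   # behaves like a database-tier workload
    [0.1, 0.0, 0.9],
]

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(traffic_profiles)
print(labels)  # e.g. [0 0 1 1 2 2]: each label is a suggested functional group
```

Once groups are proposed and confirmed, recommended policies (such as “the web tier may talk to the app tier on port 8080”) can be generated from the same observed traffic.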
New Threats Using AI
While AI has greatly improved security teams’ abilities to identify, manage, and protect against threats, the technology may create new challenges.
At DARPA’s Cyber Grand Challenge in 2016, the world’s first all-machine cyber hacking tournament, it was shown that exploit generation and attack launch can be fully automated. Beyond automation, AI can help attackers adapt an exploit to a particular environment faster and search for and classify data worth exfiltrating.
Furthermore, AI can help disguise attacks – according to Max Heinemeyer, Director of Threat Hunting at Darktrace, “we expect AI-driven malware to start mimicking behavior that is usually attributed to human operators by leveraging contextualization”.
As we’ve seen, AI is already transforming how various cybersecurity companies are responding to threats and dealing with staffing shortages. At the end of the day, AI is just like any other technology – it can be used to augment how enterprises secure themselves, or misused by attackers to breach even more enterprises and steal valuable data.
Article originally appeared on plugandtechcenter.com
Credit: Nasdaq
© De Angelis & Associates 2020. All Rights Reserved.