AI in Cybersecurity: What is Hype and What is Real?

The promise of artificial intelligence (AI) is compelling: a siren's song for security managers. It is generating both interest and investment from companies hoping to leverage the enticing power of autonomous, self-learning solutions. After all, AI is already benefiting organizations in the insurance industry, in breast cancer research, in finance, and in law enforcement. So, why not security, too?

According to a recent survey from ESET, new business expectations and misleading marketing terminology have generated significant hype around AI, to the point where 75 percent of infosec decision-makers now see AI as a silver bullet for their security issues. Such lofty expectations, set against the current reality of AI technology, put organizations at risk. While incredibly useful in assisting human analysts, AI in isolation is no replacement for a solid information security strategy implemented by experienced analysts.

Take Facebook's efforts to stop "fake news" going viral through its services, for example. The social networking platform has redeployed some of its best engineers to develop tools to track and eliminate fake news, and it has acquired promising AI start-ups such as Bloomsbury AI. Despite these enormous efforts, Greg Marra, product management director at Facebook, admitted only that "… we can reduce viewing of fake news by up to 80 percent." If AI solves at most 80 percent of the fake-news problem for a company with Facebook's resources, then expecting it to be a complete solution to security problems is clearly unrealistic.

The hard truth is that much of the excitement surrounding AI is just hype. That said, there is genuine substance beneath the hype, and real potential in the technology. It is time for a reality check: what can AI do, and what can't it?

AI Can Alleviate Cybersecurity Fatigue

We’ve needed to bring a technology like AI to cybersecurity for a long time now due to fundamental changes in the threat landscape and the lack of qualified candidates to fill open security analyst positions.

Over the last few years, nearly every organization has undergone a digital transformation. The term “digital transformation” involves using digital technologies to remake processes to make an organization more efficient or effective. The idea is to use technology not just to replicate an existing service in a digital form but also to use technology to transform that service into something significantly better.

Digital transformation can involve many different technologies, but the hottest topics right now are cloud computing, the Internet of Things, big data, and artificial intelligence. Beyond that, it’s a cultural change that requires organizations to continually challenge the status quo, experiment and get comfortable with failure. This sometimes means walking away from long-standing business processes upon which companies were built in favor of relatively new practices that are still being defined.

Such technologies have opened up amazing new organizational capabilities, but they have also created new complexities, interconnections, and vulnerability points – a larger attack surface – which cybercriminals have quickly learned to exploit. Traditional perimeter and rules-based approaches to cybersecurity no longer apply to the new digital organization. At the same time, human-only cybersecurity teams cannot process the daily flood of threat data resulting from all of the new technology and devices.

As highlighted on IBM's Security Intelligence blog, security analysts are overworked, understaffed, and overwhelmed. It is not humanly possible to keep up with the ever-expanding threat landscape, especially given the day-to-day tasks of running a security operations center (SOC).

But the benefits of improved security are compelling, including significant cost savings. According to Ponemon, organizations that identified a breach in less than 100 days saved more than $1 million as compared to those events that went undetected for more than 100 days. Similarly, organizations that contained a breach in less than 30 days saved over $1 million as compared to those that took more than 30 (but less than 100) days.

What can AI do to alleviate this situation?

AI’s speed, accuracy, and computational power offer a unique chance to protect a perimeter-less organization and to continuously process the overwhelming volume of threat data every organization now faces daily. This is because AI works very well for tedious, repetitive tasks such as looking for specific patterns. As such, its implementation can alleviate the resource constraints faced by most security operations centers (SOCs).

It can be an immeasurable benefit for intrusion prevention and detection, fraud detection, and rooting out malicious activities such as DNS data exfiltration and credential misuse. In addition, AI algorithms can be applied to user and network behavior analytics. For instance, machine learning can look at the activity of people, endpoints, and network devices like printers in order to flag malicious activity of rogue insiders.
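As an illustration of the kind of tedious, repetitive pattern-matching described above, here is a minimal sketch of one classic first-pass signal for DNS data exfiltration: unusually long or high-entropy subdomains. The function names and thresholds are illustrative assumptions, not a production detector.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string s."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_suspicious_queries(queries, entropy_threshold=3.5, length_threshold=40):
    """Flag DNS queries whose leading label looks like encoded data.

    Exfiltration tools often pack stolen data into long, high-entropy
    subdomains (e.g. base32-encoded chunks), so random-looking labels
    are a useful first-pass signal for a human analyst to review.
    """
    flagged = []
    for q in queries:
        subdomain = q.split(".")[0]
        if len(subdomain) >= length_threshold or shannon_entropy(subdomain) >= entropy_threshold:
            flagged.append(q)
    return flagged

queries = [
    "www.example.com",
    "mail.example.com",
    "mzxw6ytboi2gk3tfonuw63tbojsxo33v.evil-tunnel.net",  # base32-like payload
]
print(flag_suspicious_queries(queries))  # only the encoded-looking query
```

A real system would learn these thresholds from traffic baselines rather than hard-code them, but the shape of the task is the same: a mechanical scan over volumes of data no human team could read.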

Has AI Achieved “Magic” Status?

"Any sufficiently advanced technology," wrote Arthur C. Clarke, "is indistinguishable from magic." But this is hardly the truth about AI. Rodney Brooks notes that "AI has been overestimated again and again, in the 1960s, in the 1980s, and I believe again now, but its prospects for the long term are also probably being underestimated." Actually, according to Brooks, AI is just another application of Amara's Law, which states the following:

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

One mistake we are making with AI is our inclination to think of it as a “magic dust” that just gets smarter once you sprinkle it on an organization. That’s just not the case. Andrew Moore, Google’s new head of Cloud AI business, said recently that “AI is about using math to make machines make really good decisions. At the moment it has nothing to do with simulating real human intelligence (HI). Solving artificial intelligence problems involves a lot of tough engineering and math and linear algebra and all that stuff. It very much isn’t the magic-dust type of solution.”

That’s very true. In reality, today’s AI algorithms are nothing more than traditional machine learning algorithms. Machine learning uses statistical techniques to give computers the ability to “learn” – i.e. use data and recognize patterns in the data to progressively improve performance on a specific task, without being explicitly programmed. A machine-learning system is a bundle of algorithms that take in torrents of data at one end and spit out inferences, correlations, recommendations, and possibly even decisions at the other end. And the technology is already ubiquitous: virtually every interaction we have with Google, Amazon, Facebook, Netflix, Spotify, et al is mediated by machine-learning systems.

As Fei-Fei Li, professor at Stanford University and former chief AI scientist at Google Cloud, said during a hearing before the US House Committee on Science, Space, and Technology, “There’s nothing artificial about AI. It’s inspired by people, it’s created by people, and — most importantly — it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.”

AI Does Not Eliminate HI

AI primarily focuses on processing massive quantities of threat data. Its ability to perform these activities at near-unlimited scale, with near real-time speeds, makes it an invaluable ally within a modern, effective cybersecurity program. And these activities can be performed at every stage of cybersecurity, allowing AI to offer value before, during, and after an organization suffers an attack. But AI does not replicate human insight. It does not obviate the need for human cybersecurity experts.

As noted in a Computer Weekly article, machine learning tools are "invaluable" for malware analysis, since they are able to quickly learn the difference between clean and malicious data when fed correctly labeled samples. These engines are only as good as the data that goes into them, however, and merely feeding data into an algorithm will tell an analyst what is unusual or anomalous, but not whether it matters. It is the data scientist who needs to know how to ask the right questions to properly harness AI's capabilities.

This is exactly the difference between supervised and unsupervised machine learning. Current tools and technologies empower the former, but the latter is still largely out of reach. Without humans to monitor the input and output of systems and train the algorithms, it is possible for AI tools to capture and report basic system data, but it is well beyond their scope to deliver intelligent threat response plans. That coheres with a Ponemon study’s finding that 55 percent of security alerts detected by AI still require human supervision.
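The "what's unusual, but not whether it matters" limitation can be made concrete with a toy unsupervised detector: a z-score outlier check. The login counts and threshold below are illustrative assumptions.

```python
import statistics

def zscore_outliers(values, threshold=2.5):
    """Unsupervised anomaly detection: flag values far from the mean.

    This tells an analyst *what is unusual*, but says nothing about
    whether the anomaly actually matters; that judgment still needs a
    human (or correctly labeled training data).
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical daily login counts for one account. The spike is clearly
# anomalous, but only a human can say whether it is an attack, a batch
# job, or an audit script.
logins = [12, 9, 11, 10, 13, 8, 11, 240]
print(zscore_outliers(logins))  # flags the spike
```

Supervised tools go one step further, because labeled examples let them attach meaning to a pattern; fully unsupervised, self-directed threat response remains out of reach, which is why human oversight of both input and output stays in the loop.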

AI: Friend or Foe?

One of the key differences between applying AI to security data and to the other fields listed at the beginning of this article is that in security, the data fights back. Because the tools for developing AI are widely available in the public domain, attack-oriented AI technologies may become even more prevalent than defensive ones in the coming years. So, despite AI's success in other disciplines, using it for security is a much more challenging task, and one that requires a greater level of human involvement.

Mark Testoni, president and CEO of enterprise security company SAP NS2, commented:

“Hackers are just as sophisticated as the communities that develop the capability to defend themselves against hackers. They are using the same techniques, such as intelligent phishing, analyzing the behavior of potential targets to determine what type of attack to use, and ‘smart malware’ that knows when it is being watched so it can hide.”

The most common attack vectors in which cybercriminals could use AI technology include:

  • Machine learning poisoning, in which attackers corrupt the data pool from which an algorithm learns so that the AI system comes to classify malicious activity as benign
  • Chatbot-related cybercrimes where chatbots can analyze and mimic people’s behavior
  • Ransomware facilitation
  • Impersonation fraud and identity theft
  • Gathering intelligence and scanning for vulnerabilities
  • Phishing
  • Distributed Denial of Service (DDoS) attacks
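The first item on the list, machine learning poisoning, can be sketched with a toy threshold classifier. All class names, feature values, and scores below are invented for illustration.

```python
# Toy illustration of training-data poisoning against a simple
# threshold classifier trained on a single numeric "suspicion" score.

def train_threshold(benign_scores, malicious_scores):
    """Learn a decision boundary halfway between the two class means."""
    mean_benign = sum(benign_scores) / len(benign_scores)
    mean_malicious = sum(malicious_scores) / len(malicious_scores)
    return (mean_benign + mean_malicious) / 2

clean_benign = [1.0, 2.0, 1.5]
clean_malicious = [9.0, 10.0, 8.5]
clean_boundary = train_threshold(clean_benign, clean_malicious)

# The attacker injects malicious-looking samples mislabeled as "benign",
# dragging the learned boundary upward so that real attacks score below it.
poisoned_benign = clean_benign + [8.0, 8.5, 9.0]
poisoned_boundary = train_threshold(poisoned_benign, clean_malicious)

attack_score = 7.0
print(attack_score >= clean_boundary)     # detected with clean training data
print(attack_score >= poisoned_boundary)  # missed after poisoning
```

Real models and real poisoning attacks are far more sophisticated, but the failure mode is the same: a learner that trusts its training data inherits whatever an attacker managed to slip into it.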

AI Deployment

Beyond what AI can actually do to shore up corporate security, companies must consider implementation. How can organizations effectively deploy AI solutions to maximize results?

For starters, companies must “stop thinking AI is magic.” AI isn’t a panacea on its own and won’t solve all of your security challenges. AI can, however, increase SOC performance. What’s necessary is for the SOC teams to understand the capabilities, and limitations, of AI-powered technologies, and to make sure they have the appropriate expectations of how they can benefit and where human involvement is still required.

AI-powered tools improve security by automatically analyzing alerts and, especially when adding malicious behaviors to the analysis, filtering out the large percentage of obviously benign ones so the human members of the team can focus on the smaller percentage of “likely malicious” activity. By using AI in this way, SOCs can significantly increase the number of high-severity incidents resolved by human security analysts and decrease the risk of a successful attack, without actually needing to increase the number of analysts.
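The triage workflow described above might be sketched as a simple scoring filter. The indicator names, weights, and cutoff are hypothetical; a deployed system would learn them from labeled alert history.

```python
# Hypothetical alert triage: score alerts on simple indicators and pass
# only the likely-malicious ones to human analysts.

BENIGN_CUTOFF = 0.5  # illustrative threshold, tuned per environment in practice

# Illustrative indicator weights; a real system would learn these from data.
WEIGHTS = {
    "off_hours": 0.3,
    "new_device": 0.2,
    "privileged_account": 0.3,
    "known_bad_ip": 0.6,
}

def score(alert):
    """Sum the weights of the indicators present on the alert."""
    return sum(w for name, w in WEIGHTS.items() if alert.get(name))

def triage(alerts):
    """Split alerts into an analyst queue and an auto-closed benign pile."""
    queue = [a for a in alerts if score(a) >= BENIGN_CUTOFF]
    auto_closed = [a for a in alerts if score(a) < BENIGN_CUTOFF]
    return queue, auto_closed

alerts = [
    {"id": 1, "off_hours": True},                         # low score
    {"id": 2, "known_bad_ip": True, "new_device": True},  # high score
]
queue, auto_closed = triage(alerts)
print([a["id"] for a in queue])  # the alert worth an analyst's time
```

The point of the sketch is the division of labor: the machine burns down the pile of obviously benign alerts, and humans spend their limited hours on what remains.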

Conclusion

AI is very important, and it will change our lives beyond what we can even imagine. But to claim that this transformation has already happened is naive. AI has tremendous potential, and current developments are improving its speed and accuracy. Combined with building the proper foundations and human skills, and gaining experience with AI-based systems, organizations can maximize the existing benefits of AI and lay the groundwork for the future of machine intelligence, when AI truly does become magic.

Bert Rankin

Bert Rankin has been leading technology innovation for over 25 years including over 5 years in security solutions that prevent cybercrime. He is a frequent blogger and is often quoted in security-related articles. Bert earned his BA from Harvard University and an MBA at Stanford University.