The Buzz at RSAC 2019: Has AI Been Reduced to Snake Oil?
The expansion of Moscone Center is complete, and despite the rain, the overall enthusiasm at last week’s RSA Conference was high. With an estimated 50,000 attendees visiting the displays of 700 vendors, the expo was alive with conversations and demonstrations of the latest cybersecurity technologies. This was certainly the case for Lastline®, where our booth was crowded throughout the three days the expo was open, with a high level of enthusiasm for our newly announced cloud security capabilities, Lastline Defender™ for Cloud.
AI as Snake Oil
But there was one theme that’s a bit troubling. Across several conversations with attendees, media, and analysts, we heard how artificial intelligence (AI) is being touted by seemingly everyone, with the result that it is now viewed as little better than the snake oil that was hyped as a cure-all in the 19th century.
This is troubling because it could lead to an overall disenchantment with AI – what Gartner might show on one of their hype cycles as descending into the “trough of disillusionment” – despite the technology’s capabilities and potential for significantly improving the detection of advanced cyberattacks.
We at Lastline know a bit about AI. We’ve utilized machine learning, deep learning, expert systems, and other AI capabilities since Day 1, which was about 15 years ago. My fellow co-founders, Drs. Chris Kruegel and Engin Kirda, and I are all professors of computer science, and much of our research has focused on the applications of AI and machine learning (ML) to computer security.
The AI Bandwagon
The challenge the cybersecurity industry faces with AI is overhyped expectations. Too many vendors are jumping on the AI bandwagon with little experience in the technology, and are setting unrealistic expectations for what is possible. Here’s the reality.
AI is not a silver bullet. It will not automatically detect advanced attacks and know how to remediate all of them, all on its own. It is, however, a powerful tool that can analyze vast amounts of data, quickly, and immediately dismiss the majority of events that are clearly benign. This leaves a relatively small subset of events for understaffed security teams to manually investigate.
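To make the triage idea above concrete, here is a minimal sketch in Python. The scoring function, feature names, and thresholds are all hypothetical stand-ins for a real trained model; the point is only the workflow: score every event, automatically dismiss the clearly benign majority, and hand the small remainder to analysts.

```python
# Hypothetical sketch: using a model score to triage security events so that
# analysts only review the small fraction the system cannot dismiss as benign.
# score_event() stands in for a real trained classifier; all signals and
# thresholds here are illustrative, not from any real product.

def score_event(event):
    """Toy scoring function returning a 0..1 'suspiciousness' score."""
    score = 0.0
    if event.get("bytes_out", 0) > 1_000_000:        # unusually large upload
        score += 0.4
    if event.get("dest_port") not in (80, 443, 53):  # uncommon destination port
        score += 0.3
    if event.get("new_domain", False):               # never-before-seen domain
        score += 0.3
    return min(score, 1.0)

def triage(events, threshold=0.5):
    """Split events into those needing human review and those dismissed."""
    review, dismissed = [], []
    for event in events:
        (review if score_event(event) >= threshold else dismissed).append(event)
    return review, dismissed

events = [
    {"bytes_out": 1200, "dest_port": 443, "new_domain": False},       # ordinary HTTPS
    {"bytes_out": 5_000_000, "dest_port": 4444, "new_domain": True},  # suspicious
    {"bytes_out": 800, "dest_port": 53, "new_domain": False},         # ordinary DNS
]
review, dismissed = triage(events)
print(len(review), len(dismissed))  # -> 1 2: most traffic is dismissed automatically
```

In a real deployment the scoring function would be a trained model and the threshold would be tuned against labeled data, but the division of labor is the same: the machine handles volume, the analyst handles judgment.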
However, many of the techniques used in AI and ML have been developed for domains other than security. In fact, most ML techniques have been developed in fields such as computer vision, natural language processing, and signal processing, and, in these fields, the analyzed information (images, text, and signals) does not actively fight against the learning process.
ML Techniques Need To Be Extended
Yet this is exactly what happens in security. Threat actors continuously modify their malware samples so that they are (mis)classified as benign, while intruders cover their tracks by mimicking the traffic patterns regularly observed in the target network to avoid being flagged as anomalous. Applying ML to security is therefore not a traditional AI use case. To be effective, ML techniques need to be extended to account for adversarial machine learning: models must be trained and hardened against adversaries who deliberately craft inputs to evade or mislead them. Failing to do so will make it easy for a motivated adversary to bypass ML-based products.
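The evasion problem described above can be sketched with a toy example. Assume a hypothetical linear "detector" over two features; an attacker who can probe the detector simply nudges a malicious sample's features toward the benign side until it is misclassified. Every number and name below is illustrative.

```python
# Hypothetical sketch of ML evasion: a toy linear "malware detector" over two
# features, and an attacker who incrementally perturbs a sample's features
# until the detector misclassifies it as benign. Purely illustrative.

def detector_score(features, weights=(0.8, 0.6), bias=-1.0):
    """Linear model: a positive score means 'classified as malicious'."""
    return sum(w * f for w, f in zip(weights, features)) + bias

def evade(features, step=0.05, max_steps=100):
    """Attacker's loop: shrink the most heavily weighted feature until benign."""
    f = list(features)
    for _ in range(max_steps):
        if detector_score(f) <= 0:
            return f          # sample now classified benign
        f[0] -= step          # perturb the feature with the largest weight
    return f

malicious = [1.5, 0.9]        # initially detected: score = 0.8*1.5 + 0.6*0.9 - 1.0 = 0.74
evaded = evade(malicious)
print(detector_score(malicious) > 0, detector_score(evaded) <= 0)  # -> True True
```

A classifier trained without adversarial considerations is exactly this brittle: small, targeted perturbations flip its decision. Defenses such as adversarial training or randomized, harder-to-probe features are what "extending ML for security" means in practice.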
So, AI can be a tremendous time saver when implemented correctly. This last qualifier is key – when implemented correctly. Getting real value out of AI requires informed expectations about what is possible, careful and sustained training of the AI algorithms, abundant and diverse data for training, model quality control, and human oversight. While it might get there someday – several years from now – AI is not yet at the stage where it can fully emulate the thought process and analytic capabilities of a trained security analyst.
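The "model quality control" point above can be made concrete with a small sketch: before deploying a candidate model, evaluate it on held-out labeled data and gate deployment on minimum precision and recall. The thresholds, toy model, and holdout set below are all hypothetical.

```python
# Hypothetical sketch of model quality control: evaluate a candidate model on a
# held-out labeled set and approve deployment only if precision and recall
# clear a minimum bar. Thresholds and data are illustrative.

def evaluate(predict, holdout):
    """Compute (precision, recall) of `predict` over (features, label) pairs."""
    tp = fp = fn = 0
    for features, label in holdout:
        pred = predict(features)
        if pred and label:
            tp += 1
        elif pred and not label:
            fp += 1
        elif not pred and label:
            fn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def approve_for_deployment(predict, holdout, min_precision=0.9, min_recall=0.8):
    precision, recall = evaluate(predict, holdout)
    return precision >= min_precision and recall >= min_recall

# Toy model (flag anything scoring above 0.5) and a tiny labeled holdout set.
holdout = [((0.9,), True), ((0.8,), True), ((0.2,), False),
           ((0.1,), False), ((0.6,), False)]
model = lambda features: features[0] > 0.5
print(approve_for_deployment(model, holdout))  # -> False: too many false positives
```

Here the toy model catches every threat (recall 1.0) but its precision is only 2/3, so the gate rejects it: exactly the kind of automated check, paired with human review, that keeps an ML system trustworthy over time.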
AI Done Right
Lastline applies AI to both network traffic analysis and our knowledge base of malicious behaviors, enabling our technology to distinguish benign anomalies from malicious ones by understanding the context of the activity. Applying AI to the combination of network traffic and malware behaviors is what we call “AI Done Right.” Learn more about how Lastline uses AI to defeat advanced threats.
By Giovanni Vigna – March 14, 2019