Under the AI hood: A view from RSA Conference

Artificial intelligence and machine learning are often touted as crucial tools for automated detection, response, and remediation. Enrich your defenses with a well-curated and finely tuned collection of data, proponents insist, and let the machines drive basic security decisions at scale.

This year’s RSA Conference featured an entire track dedicated to security-focused AI, while the virtual show “floor” featured no fewer than 45 vendors hawking some form of AI or machine learning capabilities.

While the profile of AI in security has evolved over the past five years from dismissible buzzword to legitimate consideration, questions persist about its efficacy and appropriateness — even its core definition. This year’s conference may not have settled the debate, but it did highlight the fact that AI, machine learning, and deep learning technologies are making their way deeper into the fabric of mainstream security solutions. RSAC also showcased a new, formal methodology for assessing the veracity and usefulness of AI claims in security products, a capability beleaguered defenders desperately need.

“The mere fact that a company is using AI or machine learning in their product is not a good indicator of the product actually doing something smart,” Raffael Marty — an expert in the use of AI, data science, and visualization in security — told VentureBeat. “On the contrary, most companies I have looked at that claim to use AI for some core capabilities are doing it wrong in some way.”

“There are some that stick to the right principles, hire actual data scientists, apply algorithms correctly, and interpret the data correctly,” said Marty, who is also an IANS faculty member and author of Applied Security Visualization and The Security Data Lake. “Unfortunately, these companies are still not found very widely.”

In his opening-day keynote, Cisco chair and CEO Chuck Robbins pitched the need for emerging technologies — like AI — to power security approaches capable of quick, scalable threat identification, correlation, and response in blended IT environments. Today these include a growing number of remote users, along with hybrid cloud, fog, and edge computing assets.

“We need to build security practices around what we know is coming in the future,” Robbins said. “That’s foundational to being able to deal with the complexity. It has to be based on real-time insights, and it has to be intelligent, leveraging great technology like AI and machine learning that will allow us to secure and remediate at a scale that we’ve never been able to yet always hoped we could do.”

Use cases: Security AI gets real

RSAC offered examples of practical applications of AI and machine learning in information security, like those championed by Robbins and other vendor execs.

One eSecurity founder Jess Garcia walked attendees through real-world threat hunting and forensics scenarios powered by machine learning and deep learning. In one case, Garcia and his team normalized 30 days of real data from a Fortune 50 enterprise — some 224,000 events and 24 million files from more than 100 servers — and ran it through a machine learning engine, setting a baseline for normal behavior. The machine learning models built from that data were then injected with malicious event-scheduling log data mimicking the recent SolarWinds attack to see if the machine-taught system could detect the attack with no prior knowledge or known indicators of compromise.
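
Garcia’s exact pipeline wasn’t published, so the sketch below is only a minimal, hypothetical illustration of that baseline-then-hunt pattern using off-the-shelf tooling: fit an anomaly detector on stand-in features for normal activity, then score a suspicious scheduled-task event against the baseline. All feature names, distributions, and thresholds here are invented.

```python
# Hypothetical baseline-then-detect sketch; features and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for 30 days of normalized "normal" event features:
# [events_per_hour, servers_touched, task_creation_hour]
baseline = np.column_stack([
    rng.poisson(20, 10_000),       # typical hourly event volume
    rng.integers(1, 5, 10_000),    # servers touched per event window
    rng.integers(8, 18, 10_000),   # tasks created during business hours
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A SolarWinds-style injected event: low volume, many servers, odd hour
suspicious = np.array([[2, 40, 3]])
print(model.predict(suspicious))            # -1 flags an anomaly
print(model.decision_function(suspicious))  # lower score = more anomalous
```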

Garcia’s highly technical presentation was notable for its concession that artificial intelligence produced rather disappointing results on the first two passes. But when augmented with human-derived filtering and supporting information about the time of the scheduling events, the malicious activity rose to a detectable level in the model. The lesson, Garcia said, is to understand the emerging technology’s power, as well as its current limitations.

“AI is not a magic button and won’t be anytime soon,” Garcia said. “But it is a powerful weapon in DFIR (digital forensics and incident response). It is real and here to stay.”

For Marty, other promising use cases in AI-powered information security include the use of graph analytics to map out data movement and lineage to expose exfiltration and malicious modifications. “This topic is not well-researched yet, and I am not aware of any company or product that works well yet. It’s a hard problem on many layers, from data collection to deduplication and interpretation,” he said.
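
Since Marty says no product does this well yet, any code can only gesture at the idea. A toy sketch, assuming invented host names and a hard-coded threshold where a real system would learn a baseline, might model observed transfers as a directed graph and flag hosts with heavy egress to external destinations:

```python
# Toy data-lineage graph; hosts, flows, and the threshold are all invented.
import networkx as nx

G = nx.DiGraph()
flows = [  # (source, destination, bytes moved)
    ("db-server", "app-server", 10_000),
    ("app-server", "laptop-17", 2_000),
    ("laptop-17", "external-203.0.113.9", 900_000),
    ("laptop-17", "external-198.51.100.4", 850_000),
]
for src, dst, nbytes in flows:
    G.add_edge(src, dst, bytes=nbytes)

for node in G.nodes:
    egress = [(dst, d["bytes"]) for _, dst, d in G.out_edges(node, data=True)
              if dst.startswith("external-")]
    if sum(b for _, b in egress) > 500_000:  # stand-in for a learned baseline
        print(f"possible exfiltration from {node}: {egress}")
```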

Sophos lead data scientist Younghoo Lee demonstrated for RSAC attendees the use of the natural-language Generative Pre-trained Transformer (GPT) to generate a filter that detects machine-generated spam, a clever use case that turns AI into a weapon against itself. Models such as GPT can generate coherent, humanlike text from a small training set (in Lee’s case, fewer than 5,000 messages) and with minimal retraining.

The performance of any machine-driven spam filter improves as the volume of training data increases. But manually adding to an ML training dataset can be a slow and expensive proposition. The solution for Sophos was to use two methods of controlled natural-language text generation that steered the GPT model toward progressively better output, which was then used to multiply the original dataset more than 5X. The tool was essentially teaching itself what spam looked like by creating its own.
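
Sophos’ controlled-generation methods aren’t public, so the following is only a generic sketch of the augment-then-detect loop described here, assuming the openly available gpt2 model from Hugging Face, a simple scikit-learn classifier, and invented seed messages:

```python
# Generic augment-then-detect sketch; not Sophos' actual models or methods.
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

generator = pipeline("text-generation", model="gpt2")
seed_spam = ["You have won a free prize, click here to claim it now"]
seed_ham = ["Attached are the meeting notes from Tuesday's planning call"]

# Multiply the spam side of the training set with machine-generated variants
synthetic_spam = [
    out["generated_text"]
    for seed in seed_spam
    for out in generator(seed, max_new_tokens=40,
                         num_return_sequences=5, do_sample=True)
]

texts = seed_ham + seed_spam + synthetic_spam
labels = [0] * len(seed_ham) + [1] * (len(seed_spam) + len(synthetic_spam))

vectorizer = TfidfVectorizer()
detector = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)
print(detector.predict(vectorizer.transform(["Claim your free prize today"])))
```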

Armed with machine-generated messages that replicate both ham (good) and spam (bad) messages, the ML-powered filter proved particularly effective at detecting bogus messages that were, in all probability, created by a machine as well, Lee said.

“GPT can be trained to detect spam, [but] it can be also re-trained to generate novel spam and augment labelled datasets,” said Lee. “GPT’s spam detection performance is improved by the constant battle of text generating and detecting.”

A healthy dose of AI skepticism

Such use cases alone aren’t enough to recruit everyone in security onto team AI, however.

In one of RSAC’s most popular panels, famed cryptographers Ron Rivest and Adi Shamir (the R and S in RSA) hit out at machine learning as still mostly not ready for prime time in information security.

“Machine learning at the moment is totally untrustworthy,” said Shamir, a professor at the Weizmann Institute in Rehovot, Israel. “We don’t have a good understanding of where the samples come from, or what they represent. Some progress is being made but until we solve the robustness issue, I would be very worried about deploying any kind of big machine-learning system that no one understands, and no one knows in which way it might fail.”

“Complexity is the enemy of security,” said Rivest, a professor at MIT in Cambridge, Massachusetts. “The more complicated you make something, the more vulnerable it becomes. And machine learning is nothing but complicated. It violates one of the basic tenets of security.”

Even as an AI evangelist, Marty understands such hesitancy. “I see more cybersecurity companies leveraging machine learning and AI in some way, [but] the question is to what degree,” he said. “It’s gotten too easy for any software engineer to play data scientist. The challenge lies in the fact that the engineer has no idea what just happened within the algorithm.”

Developing an AI litmus test

For enterprise defenders, the academic back and forth on AI adds a layer of confusion to already difficult decisions on security investments. In an effort to counter that uncertainty, the non-profit research and development organization Mitre Corp. is developing an assessment tool to help buyers evaluate AI and machine learning claims in infosec products.

Mitre’s AI Relevance Competence Cost Score, or ARCCS, aims to give defenders an organized way to question vendors about their AI claims in much the same way they assess other basic security functionality.

“We want to be able to jump into the dialog with cybersecurity vendors and understand the security and also what’s going on with the AI component as well,” said Anne Thompson, department manager and head of NIST cyber partnerships at Mitre. “Is something really AI-enabled, or is it really just hype?”

ARCCS will provide an evaluation methodology for AI in information security, measuring the relevance, competence and relative cost of an AI-enabled product. The process will determine how necessary an AI component is to the performance of a product; if the product is using the right kind of AI and doing it in a responsible way; and whether the added cost of the AI capability is justified for the benefits derived.
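
Mitre hasn’t published ARCCS’s actual scoring rubric, so the following is purely a hypothetical sketch of how a buyer might structure those three questions; the fields, aggregation, and threshold are invented for illustration.

```python
# Hypothetical ARCCS-style checklist; not Mitre's actual scoring method.
from dataclasses import dataclass

@dataclass
class AIClaimScore:
    relevance: float   # 0-1: is AI necessary to the product's function?
    competence: float  # 0-1: is the right kind of AI applied responsibly?
    cost: float        # 0-1: added cost of the AI capability vs. alternatives

    def justified(self, threshold: float = 0.6) -> bool:
        # Naive average; a real methodology would weight and justify each axis
        return (self.relevance + self.competence + (1 - self.cost)) / 3 >= threshold

print(AIClaimScore(relevance=0.9, competence=0.7, cost=0.4).justified())
```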

“You need to be able to ask vendors the right questions, and ask them consistently,” Michael Hadjimichael, principal computer scientist at Mitre, said of the AI framework effort. “Not all AI-enabled claims are the same. By using something like our ARCCS tool, you can start to understand if you got what you paid for and if you’re getting what you need.”

Mitre’s ongoing ARCCS research is still in its early stages, and it’s difficult to say how most products claiming AI enhancements would fare in the assessment. “The tool does not pass or fail products, it evaluates,” Thompson told VentureBeat. “Right now, what we are noticing is there isn’t as much information out there on products as we’d like.”

Officials from vendors including Hunters, which features advanced machine learning capabilities in its new XDR threat detection and response platform, say reality-check frameworks like ARCCS are sorely needed and stand to benefit both security sellers and buyers.

“In a world where AI and machine learning are liberally used by security vendors to describe their technology, creating an assessment framework for buyers to evaluate the technology and its value is essential,” Uri May, CEO and co-founder of Hunters, told VentureBeat. “Customers should demand that vendors provide clear, easy-to-understand explanations of the results obtained by the algorithm.”

May also urged buyers to understand AI’s limitations and be realistic in assessing appropriate uses of the technology in a security setting. “AI and ML are ready to be used as assistive technologies for automating some security operations tasks, and for providing context and information to facilitate decision-making by humans,” May said. “But claims that offer end-to-end automation or massive reduction in human resources are probably exaggerated.”

While a framework like ARCCS represents a significant step for decision makers, having such an evaluation tool doesn’t mean enterprise adopters should now be expected to understand all the nuances and complexities of a complicated science like AI, Marty stressed.

“The buyer really shouldn’t have to know anything about how the products work. The products should just do what they claim they do and do it well,” Marty said.

Crossing the AI chasm

Every year, RSAC shines a temporary spotlight on emerging trends like AI in information security. When the show wraps, however, the work remains for security professionals, data scientists, and other advocates to shepherd the technology to the next level. Moving forward requires solutions to three key challenges:

Amassing and processing sufficient training data

Every AI use case begins with ingesting, cleaning, normalizing and processing data to train the models. The more training data available, the smarter the models get and the more effective their actions become. “Any hypothesis we have, we have to test and validate. Without data, that’s hard to do,” said Marty. “We need complex data sets that show user interactions across applications, data, cloud apps, along with contextual information about the users.”

Of course, data access and the work of harmonizing it can be difficult and expensive. “This kind of data is hard to get, especially with privacy and regulations like GDPR putting more processes around AI research efforts,” Marty said.
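
As a concrete, hypothetical illustration of that ingest-and-normalize step, the sketch below maps two invented vendor log formats onto one common schema; real pipelines handle far more sources, fields, and edge cases:

```python
# Minimal normalization sketch; the vendor formats and fields are invented.
import pandas as pd

raw_vendor_a = [{"ts": "2021-05-20T10:02:11Z", "user": "alice", "app": "crm"}]
raw_vendor_b = [{"time": 1621505000, "uname": "bob", "application": "mail"}]

def normalize(events, ts_key, user_key, app_key, unit=None):
    df = pd.DataFrame(events)
    return pd.DataFrame({
        "timestamp": pd.to_datetime(df[ts_key], unit=unit, utc=True),
        "user": df[user_key],
        "application": df[app_key],
    })

training_data = pd.concat([
    normalize(raw_vendor_a, "ts", "user", "app"),
    normalize(raw_vendor_b, "time", "uname", "application", unit="s"),
], ignore_index=True)
print(training_data)
```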

Recruiting skilled experts

Leveraging AI in security demands expertise in two complex domains: data science and cybersecurity. Finding, recruiting, and retaining talent in either specialty is difficult enough; the combination borders on unicorn territory. The AI skills shortage exists at all experience levels, from newcomers to seasoned practitioners. Rather than hunting for one or two world-class AI superstars, organizations that hope to take advantage of the technology over the long haul should diversify their sources of AI talent and build a deep bench of trainable, tech- and security-savvy team members who understand operating systems and applications and can work with data scientists.

Making adequate research investments

Ultimately, the fate of AI security hinges on consistent financial commitment to the advancement of the science. All major security firms do malware research, “but how many have actual data science teams researching novel approaches?” Marty asked. “Companies typically don’t invest in research that’s not directly related to their products. And if they do, they want to see fairly quick turnarounds.” Smaller companies can sometimes pick up the slack, but their ad hoc approaches often fall short in scalability and broad applicability. “This goes back to the data problem,” Marty said. “You need data from a variety of different environments.”

Making progress on these three important issues rests with both the vendor community, where decisions that determine the roadmap of AI in security are being made, as well as with enterprise user organizations. Even the best AI engines nested in pre-built solutions won’t be very effective in the hands of security teams that lack the capacity, capability, and resources to use them.
