Is Machine Learning Part of Your Security Strategy?

Machine learning technology is still an evolving area in security. But it has the potential to be a game changer.

In the world of security perimeter defenses, more is not necessarily better. This is particularly true with threat detection, where a product that flags 90 million possible threats a week is really no more helpful than one that flags 9 million. Indeed, from a signal-to-noise perspective, those additional discoveries may actively work against security: they make finding the 2,000 actual attack attempts (roughly 0.002 percent of 90 million alerts) that much harder. This is the much-dreaded alert fatigue dilemma.

This is the problem that machine learning, and unsupervised machine learning in particular, aimed to solve. The premise was that unsupervised ML would quickly learn a network's normal patterns and, thereafter, instantly distinguish a true threat from the ever-present noise of a large company network.
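To make that premise concrete, here is a minimal sketch of the unsupervised approach using scikit-learn's IsolationForest. The network-flow features, synthetic data, and contamination rate are illustrative assumptions, not a production configuration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per network event: bytes sent, bytes received,
# connection duration (seconds), and distinct destination ports touched.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 2000, 30, 3],
                            scale=[100, 400, 10, 1],
                            size=(10_000, 4))

# Fit on unlabeled traffic: the model learns "normal" without ever
# seeing a labeled attack example.
model = IsolationForest(contamination=0.001, random_state=0)
model.fit(normal_traffic)

# Score new events: -1 means anomalous (worth an analyst's attention), 1 means normal.
new_events = np.array([
    [520, 2100, 28, 3],      # looks like typical traffic
    [50_000, 100, 2, 250],   # exfiltration-like burst across many ports
])
print(model.predict(new_events))  # e.g. [ 1 -1 ]
```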

The hiccup with this theory is that unsupervised ML perimeter defenses suffer from the same weakness as many antivirus systems: to recognize the pattern of a serious attack, the system typically must have been successfully victimized by that attack method at least once. But cyber-attack methods evolve over time. So, as long as cyber criminals keep developing new methods, ML defenses will never be absolute.

Still, can ML be more effective than manual human alternatives? Often, the answer is “yes.”

But first, CISOs and CSOs must understand where ML works best and where it doesn’t.

“We’ve seen different use cases. Does it work well for phishing attacks? Yes. Complex social engineering attacks? No,” says Bindu Sundaresan, practice lead at AT&T Security Consulting. “It has ways to go as it’s still a learning tool for us. The more data we feed into it, the better it gets.”

In some respects, the “it can’t stop it until it’s been hurt by it” objection isn’t entirely fair. First, human security analysts suffer from the identical flaw. Second, the objection assumes the system is hunting only for specific, previously seen attack patterns. ML instead looks for pattern deviations. In other words, it’s not only looking for something that resembles a known attack; it’s also looking for atypical user behavior. And that is something software tends to do far better than mammals.
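As a toy illustration of that behavioral angle, the sketch below flags a user whose activity drifts far from their own historical baseline. The three-sigma threshold and the “after-hours download volume” metric are assumptions chosen for illustration:

```python
import statistics

def is_atypical(history: list[float], latest: float, sigmas: float = 3.0) -> bool:
    """Flag behavior more than `sigmas` standard deviations from the user's own baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) > sigmas * max(stdev, 1e-9)

# A user's typical after-hours download volume (GB) over recent weeks...
baseline = [0.2, 0.1, 0.3, 0.2, 0.25, 0.15, 0.2]
print(is_atypical(baseline, 0.3))   # False: within normal variation
print(is_atypical(baseline, 12.0))  # True: a deviation worth investigating
```

Note that nothing here depends on having seen an attack before; the only reference point is the user's own history.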

“Humans cannot possibly deal with all the alerts they’re seeing. AI will help with the triage piece,” Sundaresan says. “Most SOC (security operations center) events are measured by how long it takes to triage an event. The newer technologies will help identify the behavior and take an action on it.”

A key question in any ML analytics security strategy is: when does it make the most sense for humans to get involved, and how far should the algorithms be pushed on their own? Sundaresan’s point about ML code taking actions raises the obvious follow-up: which actions should it be allowed to take? Speed is essential in thwarting attacks, so there is a legitimate debate about whether pausing to check with a human before acting defeats the purpose.
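One common way to split that decision is a confidence-and-blast-radius policy: act autonomously only when the model is highly confident and the response is cheap to reverse, and route everything else to an analyst. The thresholds and action names below are illustrative assumptions, not any vendor's actual policy:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    confidence: float   # model's confidence the event is malicious, 0..1
    reversible: bool    # can the response be undone cheaply (e.g., re-enable an account)?

def respond(alert: Alert) -> str:
    # High confidence and easily reversed: machine speed matters most.
    if alert.confidence >= 0.95 and alert.reversible:
        return "auto-contain"            # e.g., isolate host, suspend session
    # High confidence but hard to undo: a human signs off first.
    if alert.confidence >= 0.95:
        return "escalate-for-approval"
    # Everything else lands in the analyst triage queue.
    return "queue-for-triage"

print(respond(Alert(confidence=0.98, reversible=True)))   # auto-contain
print(respond(Alert(confidence=0.98, reversible=False)))  # escalate-for-approval
print(respond(Alert(confidence=0.60, reversible=True)))   # queue-for-triage
```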

Another unknown factor here is cooperation between large companies in general, and direct competitors in particular. If all companies immediately shared security incident details with a centralized source, the patterns associated with new attack methods could be identified much faster. Will companies overcome their understandable paranoia about disclosing security details enough to trust an independent group with such highly sensitive information? For ML to ultimately deliver for enterprises, that trust, even on a limited basis, needs to happen.

This article originally appeared on CSOOnline.com