Most of us are aware of the skills shortage plaguing the cyber security industry, whether you are experiencing it first-hand or have read countless headlines.
The 2017 Global Information Security Workforce Study reveals the cyber security workforce gap is expected to hit 1.8 million by 2022. Even more eye-opening, a recent Enterprise Strategy Group survey shows only nine percent of millennials are interested in cyber security-related careers. That means we will continue to see a significant shortage of security analysts, the people charged with prioritizing the most severe threats for security investigators to pursue.
The cadence between security analysts and investigators
A typical security operations center (SOC) consists of security analysts and investigators. When an event comes in, if the analyst finds it potentially harmful, she sends it to the investigator, who either follows up or determines the event is not a threat after all. The investigator passes her findings back to the analyst, who learns over time which events indicate actual threats versus business-as-usual activities or false positives. As they continue to work together, the analysts get better and better at deciphering which events are worth escalating to the investigators.
The problem is that SOCs suffer from a manpower shortage, particularly when it comes to security analysts. So how can companies fill the gap? By deploying artificial intelligence (AI). I realize just saying “AI” opens a can of worms. If you walked around the showroom floor at the RSA Conference this year, you would have seen almost everyone claiming they “do AI.” So before going into how AI can fill the security analyst gap, let’s define AI’s true purpose.
What really is “AI”?
The goal of AI is to have a machine come to conclusions similar to those a human would come to. AI simulates a human’s thought process. Enterprises have so much data coming in from their security tools that even an army of humans staring at it would be inefficient. There’s only so much information humans can digest, so the goal is to apply AI to artificially reach the same conclusions a human would reach if given the same input. For example, if an AI looked at a complex series of patterns indicating persistent malicious behavior over time, it could identify the threat within seconds. A human would need to study tens of thousands of lines of data in an Excel spreadsheet to potentially find the same pattern, a task that is both inefficient and improbable. In other words, AI is scaling humans. That is AI.
AI complements security investigators
Going back to the SOC, AI can complement security investigators by serving as security analysts. For example, an event comes in which an AI determines is a significant threat that needs to be prioritized. The machine then sends it to the investigator, who confirms the event warrants investigation or dismisses it as a business-as-usual activity. Using supervised machine learning, the AI learns from its “supervisor,” the investigator. The investigator says, “Yes, that was a good find,” or “No, that wasn’t a good find.” The AI learns from its successes and mistakes and applies the insights the next time it evaluates an event. The objective is for the AI to become smarter over time, eventually becoming so precise that investigators spend time investigating only the most critical threats.
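To make the feedback loop concrete, here is a deliberately minimal sketch of how investigator verdicts can act as supervised labels. Everything here is a hypothetical illustration: the event features, the weight-update rule (a simple perceptron-style adjustment), and the learning rate are assumptions, not any vendor's actual model.

```python
# Toy sketch of the analyst-as-AI feedback loop: the model scores events,
# and each investigator verdict is a supervised label that updates it.
# Feature names and numbers are hypothetical illustrations.

class TriageModel:
    """Scores events by feature weights and learns from verdicts."""

    def __init__(self, learning_rate=0.1):
        self.weights = {}  # per-feature weight, starts neutral (0.0)
        self.learning_rate = learning_rate

    def score(self, event_features):
        # Sum the learned weight of each feature present in the event.
        return sum(self.weights.get(f, 0.0) for f in event_features)

    def learn(self, event_features, was_real_threat):
        # Investigator verdict is the label: reinforce features of
        # confirmed threats, penalize features of false alarms.
        delta = self.learning_rate if was_real_threat else -self.learning_rate
        for f in event_features:
            self.weights[f] = self.weights.get(f, 0.0) + delta

model = TriageModel()
# Investigator confirms two escalations and rejects one.
model.learn({"odd_hours_login", "new_geo"}, was_real_threat=True)
model.learn({"odd_hours_login", "priv_escalation"}, was_real_threat=True)
model.learn({"new_geo", "patch_scan"}, was_real_threat=False)

# A fresh event sharing features with confirmed threats now scores
# higher than one resembling a dismissed false alarm.
print(model.score({"odd_hours_login", "priv_escalation"}))
print(model.score({"patch_scan"}))
```

Real SOC tooling uses far richer models, but the shape is the same: the investigator's "yes, good find" or "no, not a threat" is exactly the labeled training signal supervised learning needs.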
Human judgement is irreplaceable. I don’t foresee a day when the machines will take over the world, at least not in cyber security, because we will always need a human to make the final determination of whether an event needs immediate investigation. However, by using AI to fill the security analyst responsibilities, investigators can work more efficiently and effectively. AI provides human investigators with the highest-quality, most likely threats to pass judgement on, as soon as they are identified, before damage is done. Investigators should not have to hunt for events to mitigate; they should automatically receive those events in a prioritized order based on risk.
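That prioritized hand-off can be sketched with a simple priority queue. The event names and risk scores below are hypothetical; the point is only that the investigator's worklist comes out sorted by risk rather than by arrival time.

```python
# Minimal sketch of a risk-prioritized event queue for investigators.
# Event names and risk scores are hypothetical illustrations.
import heapq

events = [
    ("failed VPN logins from new country", 0.91),
    ("routine patch scan",                 0.05),
    ("privilege escalation on server",     0.97),
]

# heapq is a min-heap, so negate the score to pop highest risk first.
queue = [(-risk, name) for name, risk in events]
heapq.heapify(queue)

while queue:
    neg_risk, name = heapq.heappop(queue)
    print(f"{-neg_risk:.2f}  {name}")
```

The investigator sees the 0.97-risk privilege escalation first, regardless of when it arrived, which is the behavior the paragraph above argues for.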
The existing process of having humans find a pattern of events that may indicate an active threat will never work effectively. The mountain of data, coupled with the perpetual skills shortage, prevents success; only AI can overcome both. For every case presented to investigators, ask yourself, “Was this truly worthy of my investigators’ time?” In too many cases, the answer is “no.”
This article originally appeared on CSOOnline.com