Defining “Normal” User Behavior: Don’t Just Ask ‘What,’ But ‘Why’

To excel at finding and stopping malicious user behavior, an organization has to understand its users’ “normal” patterns of interaction with corporate systems and data.

The bad news is there’s no single “normal” behavior for any company, department, or user. Behavior patterns vary based on the work the user is doing at that moment, the insiders and outsiders with whom they need to share information, and even their location. That’s why defining “normal” starts with building a highly detailed picture of usage, from which systems users interact with and which data they touch, to how they should handle unexpected situations.

Detailed Analysis

“Start out with straightforward expectations about what you can monitor and understand about the person’s role,” says Brandon Swafford, chief technology officer of user and data security at Forcepoint. “If you can’t see the activity, you can’t monitor it, profile it, or understand if it’s normal.”

Next, break down the components of each employee’s role in enough detail to understand how they work with data as they execute various business processes. Consider, for example, an architecture or engineering firm where users create documents with pricing, material, or other information that could harm the company if it were leaked. Which applications, devices, file shares, and Software-as-a-Service (SaaS) platforms do these employees use? What are the content flows among these platforms, and with the user’s internal and external contacts? In smaller organizations this mapping may be harder, since a single employee often fills several roles.
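The inventory that comes out of such a breakdown can be captured in something as simple as a per-role profile. The sketch below is a minimal illustration only; the role name, applications, shares, and partner domains are hypothetical placeholders, not a prescribed schema:

```python
from dataclasses import dataclass, field

# Hypothetical per-role usage profile; every value below is an
# illustrative placeholder, not a real product configuration.
@dataclass
class RoleProfile:
    role: str
    applications: set = field(default_factory=set)     # tools the role normally uses
    file_shares: set = field(default_factory=set)      # internal shares it touches
    saas_platforms: set = field(default_factory=set)   # sanctioned SaaS services
    partner_domains: set = field(default_factory=set)  # external domains data may flow to

engineer = RoleProfile(
    role="project-engineer",
    applications={"cad-suite", "email-client"},
    file_shares={"//fileserver/projects"},
    saas_platforms={"docs.saas.example"},
    partner_domains={"partnerfirm.example"},  # joint-venture partner
)

def fits_profile(p: RoleProfile, app: str, destination: str) -> bool:
    """True when an observed app/destination pair matches the role's expected flows."""
    return app in p.applications and destination in p.partner_domains
```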

To prevent false-positive signals, consider how these information flows change over time and how they vary based on each user’s role. An engineer working with another engineering firm on a joint project must share data with the other company. But for an employee in the business development part of the organization, for whom the other company is a competitor, sharing such information could signal a threat.
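In code, that sensitivity to role amounts to keying expectations on the sender’s role rather than on the action alone. A minimal sketch, reusing the hypothetical role and partner names from above:

```python
# Hypothetical mapping of roles to the partner domains they may share with.
partners_by_role = {
    "project-engineer": {"partnerfirm.example"},  # joint project: sharing expected
    "business-development": set(),                # same firm is a competitor here
}

def classify_transfer(role: str, destination: str) -> str:
    """Score the same transfer differently depending on the sender's role."""
    return "expected" if destination in partners_by_role.get(role, set()) else "review"

print(classify_transfer("project-engineer", "partnerfirm.example"))      # expected
print(classify_transfer("business-development", "partnerfirm.example"))  # review
```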

It’s also vital to consider what the user did before and after a suspicious action, such as emailing confidential information to an unauthorized external source. In this case, the user’s email application might have automatically filled in the wrong address and the user hit “send” before they noticed. If the user then told their boss, asked IT to delete the email, and asked the recipient to delete the attachment, it’s reasonable to assume they made an honest mistake. If they didn’t take any of those corrective actions and have repeatedly emailed such information to the contact, their action becomes highly suspicious.
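That before-and-after reasoning can be expressed as a simple heuristic. The event labels below are hypothetical stand-ins for whatever signals your logging actually produces, and the classifications are illustrative, not a recommended policy:

```python
# Follow-up actions that suggest an honest mistake rather than exfiltration.
CORRECTIVE_ACTIONS = {"reported_to_manager", "asked_it_to_delete",
                      "asked_recipient_to_delete"}

def assess_misdirected_email(follow_up_events, prior_incidents: int) -> str:
    """Judge a misdirected email by surrounding behavior, not the act in isolation."""
    if CORRECTIVE_ACTIONS & set(follow_up_events):
        return "likely honest mistake"  # user tried to undo the error
    if prior_incidents > 0:
        return "highly suspicious"      # repeated sends, no correction
    return "needs investigation"

print(assess_misdirected_email(["asked_it_to_delete"], prior_incidents=0))
print(assess_misdirected_email([], prior_incidents=3))
```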

“Normal” Is Just the Start

Without a full understanding of “normal” behavior, it’s easy to assume malicious intent among users. But acting on that assumption is risky: confronting innocent employees can alienate them and interfere with productivity, and chasing false positives wastes valuable time and money.

“Just plugging in a tool” won’t automatically help an organization understand user intent, says Swafford. It also needs to be diligent with analytics and gain a deep understanding of its own processes. Creating a baseline of “normal” behavior is just the start of the process of understanding intent, he says, “because it lets you start looking for deviations. Exploring and contextualizing such deviations is where [this exercise] starts to deliver value.”
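One common way to surface such deviations is a per-user statistical baseline. The sketch below scores today’s activity as a z-score against the user’s own history; the counts and the 3.0 threshold are illustrative assumptions, not recommended production settings:

```python
import statistics

def deviation_score(history, today: float) -> float:
    """Z-score of today's activity against the user's own baseline."""
    mean = statistics.mean(history)
    spread = statistics.pstdev(history) or 1.0  # guard against flat baselines
    return (today - mean) / spread

baseline = [4, 6, 5, 7, 5, 6, 4]  # hypothetical daily count of files sent externally
score = deviation_score(baseline, today=42)
if score > 3.0:
    print(f"deviation score {score:.1f}: flag for contextual review, not accusation")
```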

Forcepoint’s human-centric cybersecurity systems protect your most valuable assets at the human point: the intersection of users and data over networks of different trust levels. Visit www.forcepoint.com

Source: CSO Security news