Two recent defining events are helping the industry see the bigger picture of the state of cybersecurity: Verizon's Data Breach Investigations Report (DBIR) and the RSA Conference. Both the report and the conference reinforce the fact that cybersecurity has now reached the boardroom.
This year, yet again, one common denominator between the two was the message that organizations now understand that being attacked is not a matter of "if" but "when"1. That awakening is good news.
Additionally, as we've experienced, the number of attacks is increasing, and motivations vary from opportunistic credit card theft (Target) to nation-state-sponsored attacks (Anthem) to severe business disruption (yes, Sony).
Protection from external attackers fails at the weakest point in security, and once the adversary is in, it's a game of resilience and patience. What's the mean time to detection (the average time for an organization to identify that it has been attacked)? Still too long: it's measured in weeks or months, not days, which is just crazy. According to a number of industry sources, including Gartner, it's well over 200 days.
So perhaps we should treat the security problem differently. What if we shift our mindset from one end of the spectrum, where something "will happen," to the other end, where something "has already happened"?
Let's say the attackers are already inside your network. This assumption may sound like an invitation to dilute precious security resources, but in fact it gives the organization an advantage in selecting the right tools for the job. Why spend money building next-gen Maginot-like defenses if the enemy has already crossed the border?
This brings me to my next point: security may yield better results if we treat all attackers as insiders. Think about it: if we assume that attackers already know our network's topology, where our servers are, and where our valuable data (aka "crown jewels") is, then they may even have obtained the credentials they need to access those crown jewels. To make matters worse, once the attackers get a foothold in the organization (the so-called beachhead), they have a variety of Windows operating system tools at their disposal (PowerShell, net, wmic) that can help them do the rest of the job. Alternatively, they can download publicly available tools like psexec. What this means is that there is really no need for new malware, which brings us into the domain of malware-less attacks, a whole different security problem space.
An external adversary who has stolen credentials to crack into a corporate network via a vulnerable endpoint (note: the adversary doesn't necessarily need to launch malware to gain access) and is copying data from a file server to a staging area is no different from a disgruntled employee (and it certainly helps to be an administrator in this instance) copying the latest proprietary research from a company laptop with access to a critical server onto a USB device. Think of the intellectual property, PII and competitive advantage that you never want stolen or leaked to the cloud, and the cost associated with a breach, which now averages $3.5 million.
In short, the external attacker is now behaving like a malicious insider. And how do we counter that threat? What information do we need? Where do we look for contextual data? How do we distinguish between malicious and benign activity? This requires a major shift in how we approach security.
Clearly, the battlefront is at the endpoint. We need tools and methods that work well against such attack scenarios - tools that do not just look at individual events but can correlate them, irrespective of whether the actor is an insider or an outsider.
As an example, suppose psexec has been executed on a workstation. We can't tell whether it's benign or malicious behavior without more behavioral context. Even if we check its hash against VirusTotal, we still won't know if it's good or bad, since the tool itself is frequently used by system administrators.
However, when we take into consideration the workstation's domain name and the user account information, we start creating that context. There is an obvious difference between psexec executed by the Administrator account on the Administrator's workstation and psexec executed by an HR employee's account on a workstation in HR.
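To make the idea concrete, here is a minimal sketch of context-based triage. The event fields, department labels and scoring rules are illustrative assumptions, not the schema of any particular product:

```python
# Hypothetical sketch: triaging a psexec execution using user/host context.
# All field names and the "department" rule below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    process_name: str     # e.g. "psexec.exe"
    user_account: str     # e.g. "CORP\\admin.jsmith"
    user_department: str  # department resolved from the directory
    host_department: str  # department that owns the workstation

def score_psexec(event: ProcessEvent) -> str:
    """Classify a psexec execution using behavioral context."""
    if event.process_name.lower() != "psexec.exe":
        return "not-applicable"
    # An IT administrator running psexec from an IT workstation is expected.
    if event.user_department == "IT" and event.host_department == "IT":
        return "likely-benign"
    # psexec launched by, say, an HR account on an HR workstation is anomalous.
    return "suspicious"

print(score_psexec(ProcessEvent("psexec.exe", "CORP\\admin.jsmith", "IT", "IT")))
print(score_psexec(ProcessEvent("psexec.exe", "CORP\\hr.user", "HR", "HR")))
```

The point is not the specific rule but that the same process event flips from benign to suspicious once user and workstation context are joined in.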
The success of the security mission lies in the ability to analyze behaviors accurately and as quickly as possible.
The information that is collected needs to be fresh and accurate, and teams need a lot of it. Next, there's a need to correlate different information objects (think process, file, network connection, registry key, domain name, user security identifier) and to do so continuously, especially for high-value or targeted systems.
This is where Big Data comes into play. We need to be able to correlate information across multiple endpoints and multiple data sources, identifying attack patterns and trends. Only then will we be able to separate malicious behavior from benign behavior and really reduce the false-positive rate for security teams.
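One simple cross-endpoint pattern worth correlating is a single user security identifier (SID) touching many distinct hosts in a short window, a classic lateral-movement signal. The events, SIDs and thresholds below are made up for illustration:

```python
# Hypothetical sketch: flag user SIDs seen on many distinct hosts within a
# time window. Event tuples, SIDs, and thresholds are illustrative only.
from collections import defaultdict

# (timestamp in minutes, hostname, user SID, process name)
events = [
    (0, "ws-hr-01",    "S-1-5-21-1001", "psexec.exe"),
    (3, "ws-fin-02",   "S-1-5-21-1001", "psexec.exe"),
    (5, "srv-file-01", "S-1-5-21-1001", "cmd.exe"),
    (7, "ws-it-09",    "S-1-5-21-2002", "psexec.exe"),
]

def lateral_movement_candidates(events, window=30, min_hosts=3):
    """Return SIDs observed on at least min_hosts hosts within the window."""
    hosts_by_sid = defaultdict(set)
    for ts, host, sid, proc in events:
        if ts <= window:
            hosts_by_sid[sid].add(host)
    return {sid for sid, hosts in hosts_by_sid.items() if len(hosts) >= min_hosts}

print(lateral_movement_candidates(events))  # {'S-1-5-21-1001'}
```

A real pipeline would run this continuously over streaming telemetry from all endpoints, but the correlation step itself is exactly this kind of grouping across hosts and data sources.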
That's not to say it's not a good thing to knock down known threats - the ones that just cause static for security teams and responders - up front and early in the threat lifecycle (hashes and reputation lists represent that low-hanging fruit), before you have to launch a real-time investigation.
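That low-hanging-fruit step can be sketched as a hash lookup that filters known-bad binaries before anything reaches the behavioral pipeline. The blocklist contents here are fabricated placeholders, not real threat intelligence:

```python
# Hypothetical sketch: knock down known threats via a hash blocklist before
# deeper behavioral analysis. The blocklist entry is a made-up placeholder.
import hashlib

KNOWN_BAD = {
    hashlib.sha256(b"known-malware-sample").hexdigest(),  # placeholder feed entry
}

def triage(file_bytes: bytes) -> str:
    """Cheap first pass: block known-bad hashes, pass the rest onward."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_BAD:
        return "block"    # known bad: no investigation needed
    return "analyze"      # unknown: feed into behavioral correlation

print(triage(b"known-malware-sample"))  # block
print(triage(b"some-new-binary"))       # analyze
```

Everything the filter passes through still needs the contextual, correlation-based analysis described above; the hash check just clears the static.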
1: From the Verizon DBIR 2015 report: "The year 2014 saw the term "data breach" become part of the broader public vernacular with The New York Times devoting more than 700 articles related to data breaches, versus fewer than 125 the previous year. It was the year major vulnerabilities received logos (collect them all!) and needed PR firms to manage their legions of "fans." And it was the year when so many high-profile organizations met with the nigh inevitability of "the breach" that "cyber" was front and center at the boardroom level."