

Missed Security Alerts are More Common than You Think, So Let’s Fix the Problem

April 4, 2014 / Jeffrey Guy

The recent news about Target missing an alert from FireEye has set off a storm of new criticism from the media.

“Blew it,” says Bloomberg.
“Negligent,” says SC Magazine.
“Ignored for weeks,” says The Verge.

The general public, including the media and Congress, had assumed a data breach of this magnitude was impossible unless an organization was negligent. The fact that Target’s Bangalore monitoring center alerted the company’s Minneapolis SOC, and that the alert went unactioned, was the spark on ready tinder, seemingly confirming those assumptions of negligence for an eager media.

Target doesn’t deserve the criticism. The media, Congress and the general public don’t realize missed alerts are normal and happen to everyone. If a missed alert is the measure, then all information security organizations are negligent.

The Missed Alert in Context

To put the missed alert in context, there are three critical questions the Bloomberg report leaves unanswered:

  • How many alerts does the Bangalore monitoring center send to the Minneapolis SOC?
  • How many do they receive from the set of all active detection devices?
  • How long does it take the Target SOC to validate those alerts?

I don’t have an inside source to answer these questions, but I can make educated guesses based on experience. The sad fact is that the answers are very similar from organization to organization, and the picture is not good. An organization the size of Target will receive dozens of alerts per day from its detection devices (FireEye is just one of many detection layers in the enterprise). Some alerts take minutes to hours to triage; others take hours to days. If triage validates an alert, it initiates a broader investigation that takes days to weeks. Most organizations start new investigations multiple times per week.

For an organization of Target’s size, it takes tens of dedicated full-time employees just to keep up with the alerts from detection devices. Few organizations can afford that level of investment, so they are forced to make risk-based decisions about which alerts to ignore. The only way the CISO can improve this picture is to get additional investment from the CFO, but that’s a difficult conversation when the independent audit team has certified the organization as compliant with the industry’s regulations.
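The headcount claim above follows from simple arithmetic. Here is a back-of-envelope sketch; every number is an illustrative assumption chosen to match the rough ranges in this post (dozens of alerts per day, hours of triage each, multi-week investigations), not figures from Target or any real organization:

```python
# Back-of-envelope estimate of analyst headcount needed to triage every
# alert and staff every investigation. All numbers are illustrative
# assumptions, not real figures.

ALERTS_PER_DAY = 50          # "dozens of alerts per day"
AVG_TRIAGE_HOURS = 4         # between "minutes to hours" and "hours to days"
INVESTIGATIONS_PER_WEEK = 3  # "multiple times per week"
INVESTIGATION_HOURS = 80     # "days to weeks"; call it two work-weeks of effort
ANALYST_HOURS_PER_WEEK = 40

triage_hours_per_week = ALERTS_PER_DAY * 7 * AVG_TRIAGE_HOURS        # 1400
investigation_hours_per_week = INVESTIGATIONS_PER_WEEK * INVESTIGATION_HOURS  # 240
total_hours_per_week = triage_hours_per_week + investigation_hours_per_week

analysts_needed = total_hours_per_week / ANALYST_HOURS_PER_WEEK
print(f"Analyst FTEs needed: {analysts_needed:.0f}")  # prints "Analyst FTEs needed: 41"
```

Even with conservative inputs, the estimate lands in the tens of full-time analysts, which is the point: most security budgets simply cannot cover exhaustive triage.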

Compliant is not secure. Industry best practices are insufficient. We must treat detection and response as a continuous process, integrated into our day-to-day operations, but there are no external, independent sources by which a CISO can measure an organization’s true security posture.

What You Can Do: IR Automation to Decrease the Time to Complete an Investigation

Guidance that recognizes the inevitability of compromise is too large of a topic for a single blog post. I’ll offer one thought to shape your thinking: You can improve your security posture by decreasing the time required to complete an investigation.

For years, our industry has pushed detection vendors to reduce the cost of false positives. Detection of suspicious activity is an imperfect process, more closely related to Netflix recommendations than to your endpoint antivirus signatures. Investigating each alert is expensive, and customers (such as Target) cannot afford to keep up with the alerts, so false positives are frustrating and costly. As a result, customers pressure vendors to reduce, reduce, reduce the false positive rates.

Pressure on vendors to improve detection algorithms is healthy, but as a customer I can gain more operational efficiency from decreasing the cost of investigations than from decreasing the false positive rates. If triaging those alerts can be done in seconds, you not only get high confidence in the triage, you can afford to triage more alerts. This is the goal of Bit9 + Carbon Black. We are productizing incident response by automating the tedious, time-consuming data collection and analysis. Our goal is for you to have all the data you need to make immediate decisions, always available at your fingertips. Your team can do incident response in seconds, ensuring no alert goes untriaged.
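To make the idea concrete, here is a minimal sketch of what automating that data collection looks like: enrich an incoming alert with endpoint context that has already been recorded, then apply a triage rule. The function names, field names, and data source are all hypothetical illustrations, not a real Bit9 + Carbon Black API:

```python
# Hypothetical sketch of automated alert triage: attach pre-collected
# endpoint context to an alert, then apply a simple rule. All names and
# data structures here are illustrative, not a real product API.

def enrich_alert(alert, endpoint_db):
    """Attach recorded endpoint data so a decision can be made immediately,
    instead of starting a manual, hours-long data-gathering hunt."""
    host = endpoint_db.get(alert["host"], {})
    return {
        **alert,
        "process_tree": host.get("process_tree", []),
        "recent_binaries": host.get("recent_binaries", []),
    }

def triage(enriched):
    """Toy triage rule: escalate if the process ancestry contains a binary
    outside the approved set; otherwise queue for human review."""
    approved = {"explorer.exe", "chrome.exe", "svchost.exe"}
    suspicious = [p for p in enriched["process_tree"] if p not in approved]
    return "escalate" if suspicious else "needs_review"

# Toy data standing in for continuously recorded endpoint state.
endpoint_db = {
    "ws-042": {
        "process_tree": ["explorer.exe", "evil.exe"],
        "recent_binaries": ["evil.exe"],
    }
}
alert = {"host": "ws-042", "signature": "malware.binary"}

verdict = triage(enrich_alert(alert, endpoint_db))
print(verdict)  # prints "escalate": evil.exe is not in the approved set
```

The point of the sketch is the shape, not the rule: when the endpoint data is already collected and indexed, triage becomes a lookup plus a decision, which is how seconds-per-alert becomes possible.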

Fully Realize the Value of Your Existing Investments

It is unfair to blame Target for the compromise. It is unfair to blame Target for a single missed FireEye alert. Target followed industry-standard best practices and was compliant with the required guidelines and regulations. The fault lies with the security industry. We have failed to recognize the inevitability of compromise and update our guidelines, technologies and best practices to counter the threat.

Recently, I wrote about the newly emerging model for information security operations, describing an emerging consensus around information security as a continuous process. At Bit9, we are committed to enabling you to run the lifecycle of prevention, detection and response as a continuous, integrated process. In this context, our job is to enable your team to investigate alerts as rapidly as they arrive, then feed the investigation results back into enterprise-wide prevention and detection.

Don’t let alerts from your detection investments sit unused. Instead, prepare your environment to respond rapidly to new alerts and fully realize the value of your existing investments.

TAGS: Alerts / bit9 / Carbon Black / fireeye / incident response / Target