
NSA Best Practices Now Include Application Whitelisting

January 26, 2015 / Matt Larsen

The Information Assurance Directorate of the National Security Agency/Central Security Service recently released a new document titled "Defensive Best Practices for Destructive Malware."

The document clearly states a concise overall strategy for preventing malware: “Prevent, Detect, and Contain.”

Forgive a brief moment of self-indulgence as I point out that Bit9 + Carbon Black is founded on using visibility to deliver "Prevention, Detection and Response."

I am glad to hear the NSA/CSS agrees.

The report contains some good guidelines for companies to keep in mind, but I think the NSA/CSS, other government directorates, and corporations need to shift their thinking a bit.

The document focuses on malware that can “steal or destroy data that is on the network.” The document also lists some specific high-level strategies for dealing with threat actors, most of which we have heard before—control administrative privileges, patch regularly, etc.—but there are a few that are more interesting.

This may be a matter of semantics, but I think it is a key distinction that the report misses: important data is connected to the network, but it does not exist on the network. It lives on the endpoints—database servers, file servers, workstations, laptops, etc.

The continued focus on the network as the place to stop an attacker is outdated and insufficient. Even medieval strategists understood this concept. Sappers or siege weapons WILL eventually break a hole in your walls, because the walls are the part of your fortress facing the public, so your building-by-building defense matters even more. In modern terms, that means your endpoints must also be protected.

Relying on the network alone is inevitably self-defeating. The perimeter and network fundamentals still need to be there—preferably with next-gen solutions that can integrate with other security products. In that regard, I agree with the NSA/CSS's first recommendation to "segregate network systems in such a way that an attacker who accesses one enclave is restricted from accessing other areas of the network."

The document's second point, to "deploy, configure and monitor application whitelisting," is a great one, and while I believe companies can do better than the examples given in the document, I am optimistic that the NSA/CSS is getting on board with this strategy. However, not all whitelisting solutions are the same, and companies should seek products that do the following (a brief sketch of the core allowlisting decision appears after the list):

  1. Deliver automatic alerts
  2. Integrate with network devices such as next-gen firewalls, malware analysis devices, SIEMs, and reporting tools
  3. Monitor memory
  4. Globally ban newly discovered malware for an additional layer of security
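To make that last point concrete, here is a minimal sketch in Python of the decision at the heart of application whitelisting: execution is denied by default unless a file's hash is on an approved list, and a globally banned hash is blocked everywhere regardless of any other policy. The hash values and the hard-coded lists are hypothetical; a real product would pull policy from a central server and layer monitoring, alerting, and enforcement hooks on top of this logic.

```python
import hashlib

# Hypothetical example lists -- a real deployment would fetch these from a
# centrally managed policy server, not hard-code them.
APPROVED_HASHES = {
    "3f2acf...example-approved-sha256...",
}
GLOBALLY_BANNED_HASHES = {
    "9d71be...example-banned-sha256...",
}


def sha256_of(path: str) -> str:
    """Compute the SHA-256 hash of a file on disk."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def allow_execution(path: str) -> bool:
    """Deny-by-default allowlisting decision for a single executable."""
    file_hash = sha256_of(path)
    if file_hash in GLOBALLY_BANNED_HASHES:
        # A global ban overrides everything else -- newly discovered malware
        # can be blocked across the enterprise with one policy update.
        return False
    # Unknown files are denied, which is what separates allowlisting
    # from signature-based blocklisting.
    return file_hash in APPROVED_HASHES
```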

The document also focuses on the use of antivirus software and anti-exploit services, but these defenses stop only older, well-known, previously identified malware with known signatures. Nearly every major successful attack has used new, dynamically generated files for each target, making antivirus and anti-exploit software largely ineffective against it. While I would not abandon these technologies, continuing to make a major investment in them is misguided.
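The weakness is easy to demonstrate. In the hypothetical sketch below, changing a single byte of a payload (as an attacker would when rebuilding or repacking a file per target) produces a completely different SHA-256 value, so a signature blocklist built from yesterday's sample never matches today's rebuild. A deny-by-default allowlist blocks both versions, because neither was ever approved.

```python
import hashlib

# Hypothetical payloads: the second is the first with one byte changed,
# mimicking an attacker rebuilding or repacking a file for each target.
original_sample = b"MZ\x90\x00...malicious payload v1..."
retargeted_sample = b"MZ\x90\x00...malicious payload v2..."

# "Signature" blocklist built from yesterday's known-bad sample.
blocklist = {hashlib.sha256(original_sample).hexdigest()}

for name, sample in [("original", original_sample), ("retargeted", retargeted_sample)]:
    digest = hashlib.sha256(sample).hexdigest()
    print(f"{name}: hash {digest[:16]}... matched blocklist: {digest in blocklist}")

# Illustrative result: the original sample matches the blocklist, the
# retargeted one does not, even though the two files are functionally identical.
```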

Overall, this document is a step in the right direction. It is beginning to put some of the focus on the entire range of IT security required to truly stop an advanced attacker cold, and it concludes with a very effective directive on preparing for incident response and recovery.

TAGS: bit9 / contain / detect / detect malware / malware / malware defense / nsa / prevent / report / Response
