by David Gibson
In virtually all organizations today, sensitive data is overexposed and vulnerable to misuse or theft, leaving IT in an ongoing race to prevent data loss. Packet sniffers, firewalls, virus scanners and spam filters are doing a good job securing the borders, but what about insider threats? The threat of legitimate, authorized users unwittingly (or wittingly) leaking critical data just by accessing data that is available to them is all too real.
Analyst firms such as IDC estimate that in five years, unstructured data, which makes up 80 percent of organizational data, will grow by 650 percent. The risk of data loss is increasing as more dynamic, cross-functional teams collaborate and data is continually transferred between network shares, email accounts, SharePoint sites, mobile devices and other platforms. As a result, security professionals are turning to data loss prevention (DLP) solutions for help. Unfortunately, organizations are finding that these DLP solutions often fail to fully protect critical data because they address symptoms at the perimeter rather than the underlying problem: users with inappropriate or excessive rights to sensitive information.

DLP Alone Is Not a Panacea
DLP solutions primarily focus on classifying sensitive data and preventing its transfer with a three-pronged technology approach:
- Endpoint protections encrypt data on hard drives and disable external storage to stop data from escaping via employee laptops and workstations.
- Network protections scan and filter sensitive data to prevent it from leaving the organization via email, HTTP, FTP and other protocols.
- Server protections focus on content classification and identifying sensitive files that need to be protected before they have a chance to escape.
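To make the server-side classification step concrete, a minimal content scanner might flag files that match a few sensitive-data patterns. This is an illustrative sketch only: the pattern names and regular expressions below are simplified assumptions, not any DLP product's actual rule set.

```python
import re

# Hypothetical pattern set for sensitive-content classification.
# Real DLP engines use far richer rules (checksums, proximity, dictionaries).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive-content labels detected in the text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

def is_sensitive(text: str) -> bool:
    """A file is flagged if any pattern matches its content."""
    return bool(classify(text))
```

Even a toy scanner like this illustrates the scale problem the article describes: run it across millions of files and you get a flood of matches, with no indication of who owns or uses each flagged file.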
This approach works well if an organization knows who owns all the sensitive data and who’s using it. Since that is almost never the case, once the sensitive data is identified, which in an average-sized organization can take months, IT is left with the monumental job of finding out to whom the sensitive data belongs. Who has and should have access to it? Where is it exposed to too many people? Who is using it? These questions must be answered in order to identify the highest-priority sensitive data (i.e., the data in use) and to determine the appropriate data loss prevention procedures.
Unfortunately, DLP’s file-based approach to content classification is cumbersome at best. Upon implementing DLP, it is not uncommon to have tens of thousands of “alerts” about sensitive files. The challenge doesn’t stop here for IT. Select an alert at random – the sensitive files involved may have been auto-encrypted and auto-quarantined, but what comes next? Who has the knowledge and authority to decide the appropriate access controls? Who are we now preventing from doing their jobs? How and why were the files placed here in the first place?
DLP solutions provide very little context about data usage, permissions and ownership, making it difficult for IT to proceed with sustainable remediation. The reality is that sensitive files are being used to achieve important business objectives – digital collaboration is essential for organizations to function successfully. But, in order to do this, sensitive data must be stored somewhere that allows people to collaborate with it while at the same time ensuring that only the right people have access and that their use of sensitive data is monitored.
Context Is King
When an incident occurs or an access control issue is detected, organizations shouldn’t be required to turn their business into a panic room. Rather, solutions to prevent data loss need to enable the personnel with the most knowledge about the data – the data owners – to take the appropriate action to remediate risks quickly, in the right order. To do this, organizations need enterprise context awareness – i.e., knowledge of who owns the data, who uses the data, and who should and shouldn’t have access.
The keys to providing the necessary context lie with metadata and automation: collecting and analyzing the required metadata non-intrusively, automating workflows and auto-generating reports, and having a reliable operational plan to deploy and make use of them. With recent advancements in metadata technology, data governance software gives organizations the ability to improve DLP implementations by not only automating the process of identifying sensitive data but also showing what data is exposed, what is in use and who is using it: the needed context for comprehensive DLP. By non-intrusively and continually collecting critical metadata such as permissions, user and group activity, access and sensitivity, and then synthesizing this information, data governance software provides visibility never before available with traditional DLP implementations. When data governance software is used in conjunction with traditional DLP software, implementations move faster and sensitive data is more accurately identified and protected.
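The kind of metadata synthesis described above can be sketched in a few lines. The model below is purely illustrative, assuming two common risk signals: sensitive data reachable by a global access group, and users who hold permissions they never exercise. The folder fields, group names and rules are hypothetical, not drawn from any specific product.

```python
from dataclasses import dataclass, field

# Assumed markers for "open to everyone" access; real environments vary.
GLOBAL_GROUPS = {"Everyone", "Domain Users"}

@dataclass
class Folder:
    path: str
    sensitive: bool          # result of content classification
    allowed: set[str]        # permissions metadata: users/groups with access
    active_users: set[str] = field(default_factory=set)  # observed access activity

def overexposed(folder: Folder) -> bool:
    """Sensitive data reachable by a global group is overexposed."""
    return folder.sensitive and bool(folder.allowed & GLOBAL_GROUPS)

def stale_access(folder: Folder) -> set[str]:
    """Users permitted but never seen using the data: candidates for removal."""
    return {u for u in folder.allowed - GLOBAL_GROUPS
            if u not in folder.active_users}
```

Combining the two signals turns a raw pile of DLP alerts into actionable questions for data owners: this sensitive folder is open to everyone, and these specific users have access they do not appear to need.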
With more than 23 million records containing personally identifiable information (PII) (Source: privacyrights.org) leaked in 2011 alone, it is more important than ever for organizations to ensure sensitive data is secure. Regulatory moves such as the European Union’s recent decision to fine businesses that breach its privacy rules up to two percent of their global turnover make it imperative for organizations to ensure their DLP practices are quick, comprehensive and continuous.
Integrating data governance software automation into existing or new DLP implementations not only ensures sensitive data is secure but also provides a speed and scale that traditional DLP cannot achieve. Because data governance software automatically adjusts as changes to file structures and activity profiles occur, access controls to shared data are always current and based on business needs. As a result, the fundamental step to data loss prevention is addressed: limiting what data makes its way to the wrong eyes, laptops, printers and USB drives in the first place. That way, efforts to further protect data via means such as filtering and encryption can be focused more efficiently on only those items that are valuable, sensitive and actively being accessed.
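Focusing protection on data that is valuable, sensitive and actively accessed amounts to a ranking problem. One hypothetical way to sketch it: score each share by sensitivity, breadth of access and recent activity, then remediate from the top down. The field names and weights here are illustrative assumptions, not a vendor formula.

```python
# Hypothetical risk ranking for prioritizing remediation effort.
def risk_score(sensitive: bool, users_with_access: int,
               accesses_last_30d: int) -> int:
    """Higher score = sensitive data that is widely exposed and in active use."""
    sensitivity_weight = 10 if sensitive else 1
    # +1 so broad exposure still registers even with no observed activity
    return sensitivity_weight * users_with_access * (accesses_last_30d + 1)

# (share name, sensitive?, users with access, accesses in last 30 days)
shares = [
    ("hr-records",  True,  500, 40),   # sensitive, wide open, heavily used
    ("eng-scratch", False, 500, 200),  # busy but not sensitive
    ("old-archive", True,  5,   0),    # sensitive but locked down and idle
]
ranked = sorted(shares, key=lambda s: risk_score(s[1], s[2], s[3]),
                reverse=True)
```

Under this scoring, the sensitive, widely exposed and actively used share rises to the top of the remediation queue, which is exactly the "data-in-use" prioritization the article argues traditional DLP alerts fail to provide.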
David Gibson is the vice president of strategy at Varonis (New York, NY). www.varonis.com