Happy Anniversary! GDPR One Year Later

It’s been a year since we — and many of you — went live with enhancements to our privacy and security programs tied to GDPR, and two years since we started the GDPR journey. That’s why it’s a great time to look back at the impact GDPR has had on the way we do business.

This post is purely for general information purposes and is not intended as legal advice. It gives a glimpse into Code42's early GDPR implementation. Our program, like GDPR itself and other national and international privacy rules, will continue to evolve and mature.

“ The GDPR journey shouldn’t be a one-department initiative or the sole responsibility of Legal or Security. It must be a business-driven initiative with Legal and Security providing recommendations and guidance. ”

What we did to get ready for May 2018

We started preparing for GDPR around May 2017. The GDPR journey shouldn't be a one-department initiative or the sole responsibility of Legal or Security. It must be a business-driven initiative with Legal and Security providing recommendations and guidance. At Code42, we established a cross-functional group composed of Legal, Security, IT and system subject matter experts. The key activities of this group were to:

  1. Create an inventory of applications in scope for GDPR. We have European employees and customers, so we had to look at applications that were both internal and customer-impacting. When outlining in-scope applications for GDPR, we kept in mind that more restrictive data privacy laws seem imminent in the U.S. We also conducted a cost-benefit analysis to determine whether we should keep non-EU PI in scope now or revisit it at a later date.
  2. Define retention periods for all of the applications in scope. Prior to our GDPR journey, we had a retention program in place, but it was largely focused on data we knew we had legal, regulatory or other compliance obligations around, including financial records, personnel files, customer archives and security logs. GDPR just gave us the nudge we needed to mature what we were already committed to and have better conversations around what other data we were storing and why.
  3. Figure out how to purge personal data from applications. This may be challenging for SaaS organizations. When applications are managed on premises, it's much easier to delete the data when you no longer need it. But translating that to all your SaaS applications is another story. There are a few areas where SaaS applications are still maturing compared to their on-prem counterparts, and data deletion appears to be one of them. Delete (or anonymize) data where you can; a simple sketch of a retention check appears after this list. Otherwise, either add the application to a risk register, asking the application owner to formally accept the risk and submit a feature request to the vendor, or look for a new vendor who can meet your retention requirements.
  4. Create an audit program to validate compliance with our security program. We are fortunate to have an awesome internal audit program that monitors effectiveness of our security program, among other IT and technology-related audit tasks. So it was logical to test our in-scope applications against our newly defined retention requirements. We review applications periodically.
  5. And lastly, but just as important, define a process for data subjects to request that their information be deleted outside of a standard retention schedule (aka “right to be forgotten”). It is important to remember that this right is not absolute. While we want to honor a data subject's request as much as possible, there may be legitimate business cases where you need to retain some data. The key for us was defining what those legitimate business cases were so we could be as transparent as possible if and when we received a request.
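To make the retention idea concrete, here is a minimal sketch of how an application inventory with defined retention periods could be checked for data that is overdue for purging. The application names, retention periods and dates below are hypothetical, and this is not our actual tooling, just an illustration of the record keeping involved.

```python
from datetime import date, timedelta

# Hypothetical inventory: each in-scope application, the personal data it
# holds, its retention period and the date of its oldest record.
INVENTORY = [
    {"app": "crm", "data": "customer contacts", "retention_days": 1095, "oldest_record": date(2016, 3, 1)},
    {"app": "hr_system", "data": "personnel files", "retention_days": 2555, "oldest_record": date(2013, 6, 15)},
    {"app": "survey_tool", "data": "respondent emails", "retention_days": 365, "oldest_record": date(2018, 1, 10)},
]

def overdue_for_purge(entry, today=None):
    """Return True if the application's oldest record has outlived its retention period."""
    today = today or date.today()
    cutoff = today - timedelta(days=entry["retention_days"])
    return entry["oldest_record"] < cutoff

for entry in INVENTORY:
    if overdue_for_purge(entry):
        print(f"{entry['app']}: {entry['data']} exceeds the {entry['retention_days']}-day "
              "retention period - purge, anonymize or record the accepted risk")
```

Even a simple check like this forces the conversations that matter: what data each application holds, how long it is kept, and who owns the decision when the retention period runs out.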

What we’ve learned in the last year

So what have we learned about GDPR one year and two internal audits later? A lot. 

What’s going well

1. A vendor playing nice

We had a really great success story early on with one vendor. When we dug into how we were using their application, we found that our users were previously set up with the ability to use any email address (not just a Code42 email). We also learned our instance was configured to save PII that wasn't a necessary business record. Based on that conversation, we were able to make a few configuration changes and actually take that application out of scope for GDPR!

2. A more robust application lifecycle program and greater insight into the actual cost of a tool

As a technology company that is continually innovating, we want to empower our users to use tools and technologies that excite them and increase productivity. At the same time, we want to ensure we are addressing security, privacy and general business requirements. Users often find tools that are “so cheap” in terms of the cost of user licenses. Our new Application Lifecycle Management (ALM) process, however, gives us a better sense of the actual cost of a new tool when we factor in:

  • Onboarding requirements: Think Legal, Security, IT, Finance. Are there compliance requirements? Do we already have similar tools in place?
  • Audit requirements: Will this be part of the GDPR data retention audit, user access audit or other application audit?
  • Stand-up/stand-down requirements: Will it integrate with our single sign-on solution? How does it integrate with other tools? How is data returned or destroyed?
  • Support requirements: Who are users going to contact when they inevitably need help using the tool?

When the person making the request can see all of the added costs going into this “inexpensive” tool, it makes for easier discussions. Sometimes we’ve moved forward with new tooling. Other times we’ve gone back to existing tools to see if there are features we can take advantage of because the true “cost” of a new solution isn’t worth it.

3. A great start toward the next evolution of privacy laws

On the heels of GDPR, there has been a lot of chatter about the introduction of more robust state privacy laws and potentially a federal privacy law. While future regulations will certainly have their own nuances, the work you do now positions you to comply with them through small tweaks rather than another major lift like the GDPR effort.

What’s not working

1. What exactly IS personal data?

We have had a lot of conversations about what data was in scope… and I mean A LOT. According to the GDPR, personal data is defined as any information related to an identified or identifiable natural person. That puts just about every piece of data in scope. And while an all-or-nothing approach may seem easier, consider risks that could affect things like availability, productivity, retention, etc. when implementing controls, then scope programs appropriately to address those risks in a meaningful way.

2. “Yes, we are GDPR compliant!”

One thing we realized very quickly was that it wasn’t enough to simply ask our vendors if they were “GDPR compliant.” We ended up with a lot of “Yes!” answers that upon further investigation were definite “No’s.” Some lessons learned: 

  • Understand the specific requirements you have for vendors: Can they delete or anonymize data? Can they delete users? (A sketch of what “delete” versus “anonymize” can mean follows this list.)
  • Whenever possible, schedule a call with your vendors to talk through your needs instead of filing tickets or emailing. We found it was much easier to get answers to our questions when we could talk with a technical representative.
  • Ask for a demo so they can show you how they’ll delete or anonymize data and/or users. 
  • Don’t rely on a contractual statement that data will be deleted at the end of a contract term. Many tools still aren’t able to actually do this. It’s important that you know what risks you are carrying with each vendor.
  • Audit your vendors to ensure they are doing what they said they would. 
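As a concrete illustration of what we mean when we ask whether a vendor can delete or anonymize data, here is a rough sketch using an entirely made-up user record. Note that hashing an identifier, as below, is strictly pseudonymization; true anonymization means the person can no longer be identified at all, which is a higher bar.

```python
import hashlib

# A hypothetical user record a SaaS vendor might hold about one of your employees.
record = {
    "user_id": "u-1042",
    "email": "jane.doe@example.com",
    "name": "Jane Doe",
    "last_login": "2019-04-30",
    "tickets_opened": 7,
}

def delete(record):
    """Deletion: nothing about the person remains."""
    return None

def anonymize(record):
    """Keep aggregate/usage fields; strip or irreversibly transform identifying ones.
    (Hashing the ID keeps records countable but is only pseudonymization under GDPR.)"""
    return {
        "user_id": hashlib.sha256(record["user_id"].encode()).hexdigest()[:12],
        "email": None,
        "name": None,
        "last_login": record["last_login"],
        "tickets_opened": record["tickets_opened"],
    }

print(anonymize(record))
```

Asking a vendor to walk through exactly which fields their "anonymization" removes or transforms is often far more revealing than a yes-or-no compliance answer.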

Would we do it all over again?

Actually, yes. While our GDPR project caused some grumbling and frustration at the beginning, it has now become an integrated part of how we operate. There is no panic and no annoyance. Instead, there are lots of great proactive conversations about data. At the end of the day, we have matured our tool management and our privacy and security practices, and our data owners feel a stronger sense of data ownership.


Code42 Bring Your Coder to Work Day 2019

Code42 Builds Security Workforces From the Ground Up While Connecting With Our Community

An ongoing conversation within security and technology is how to create more diversity in the workplace. To do so, it's imperative to introduce science, technology, engineering and math (STEM) activities to all kids at a young age and spark an interest in a future career in the field. At Code42, we've been working to help change that narrative through a variety of in-house initiatives and outreach activities. We've worked with the Girl Scouts on coding and cyber badges, sponsored numerous women in tech events, including Minnesota Women in Tech and NCWIT Aspirations in Computing, and this summer we'll be hosting a week-long App Camp For Girls and gender non-conforming kids.

“ Kids gain a better understanding of the role technology plays in their day-to-day lives, and how they all can help shape present and future technologies. ”

But one of the perennial favorites for both the kids and grownups at Code42 is our annual Bring Your Coder to Work Day, the Code42 version of Bring Your Child To Work Day. Turns out this annual event is also one of the best opportunities for outreach in helping shape future generations of kids interested in STEM careers.

This year, on April 25, approximately 200 future coders, ages 0 to 18, descended on the Code42 headquarters in downtown Minneapolis to participate in a day of learning led by current Code42 employees, also known as Guardians. The event, now in its fifth year, is a fun way for kids to learn and get excited about careers in technology.

Starting out young, our littlest guardians (0-5 years) gain familiarity with coding basics by playing Robot Turtles board games while experiencing the unique office environment of mom’s or dad’s tech company (Juice in the fridge! Cereal bar! Bean bag chairs!).  

From there the kids progress with their knowledge by age group and take part in a variety of coding activities, including:

  • Dash the Robot & Scratch: Kids learned the basics of algorithms and writing instructions for computers, then brought those lessons to life by completing various challenges with Dash the Robot.
  • Joke Machine: Kids learned the basics of HTML and CSS to create their own website with their best jokes. (e.g., Q: What does a baby computer call its father? A: Data)
  • Arduino: Kids learned the basics of C programming, circuitry and problem-solving with Arduino kits.
  • Picade: This session focused on how to assemble a hand-built arcade gaming system using a Raspberry Pi.
  • Capture the Flag (CTF): And new this year, the oldest kids took part in a specially designed CTF exercise with our Security Team. Kids learned about ethical hacking and how to solve problems without having a clear roadmap from which to work.

The day provided STEM and cybersecurity learning opportunities in a fun environment for kids of all ages and backgrounds. In addition to adding mom/dad cred, kids gain a better understanding of the role technology plays in their day-to-day lives, and how they all can help shape present and future technologies.

I’ve witnessed firsthand the value of this day. My daughter took part in her third Coder Day this year – a day she looks forward to and talks about throughout the year. She loved getting to make her robot “dance” and left the office asking if she could get her own robot so she could continue to practice coding.

This day has imparted a sense of confidence and empowerment in her. I've overheard her in conversations with both grown-ups and kids: when someone brings up a problem they are having, she jumps in with, “My mom can fix that, she's a coder! And someday I'm going to be a coder, too.” Of course, that sort of response makes me feel like a bit of a superhero, but more importantly, it reassures me that the lessons she learns from Coder Day are foundational building blocks that show her the power to solve problems lies with her, not someone else.

I look forward to seeing this generation of diverse coders continue to grow and re-shape the world of security and technology that we know today. Beyond that, Coder Day is simply SO rewarding and tons of fun!


Tips From the Trenches: Automating Change Management for DevOps

One of the core beliefs of our security team at Code42 is SIMPLICITY. All too often, we make security too complex, often because there are no easy answers or the answers are very nuanced. But complexity also makes it really easy for users to find work-arounds or ignore good practices altogether. So, we champion simplicity whenever possible and make it a basic premise of all the security programs we build.

“ At Code42, we champion simplicity whenever possible and make it a basic premise of all the security programs we build. ”

Change management is a great example of this. Most people hear change management and groan. At Code42, we've made great efforts to build a program that is nimble, flexible and effective. The tenets we've defined that drive our program are to:

  • PREVENT issues (collusion, duplicate changes)
  • CONFIRM changes are authorized changes
  • DETECT issues (customer support, incident investigation)
  • COMPLY with regulatory requirements

Notice compliance is there, but last on the list. While we do not discount the importance of compliance in the conversations around change management or any other security program, we avoid at all costs using the justification of “because compliance” for anything we do.

Based on these tenets, we focus our efforts on high-impact changes that have the potential to affect our customers (both external and internal). We set risk-based maintenance windows that balance potential customer impact with the need to move efficiently.

We gather with representatives from both the departments making changes (think IT, operations, R&D, security) and those impacted by changes (support, sales, IX, UX) at our weekly Change Advisory Board meeting–one of the best attended and most efficient meetings of the week–to review, discuss and make sure teams are appropriately informed of what changes are happening and how they might be impacted.

This approach has been working really well. Well enough, in fact, for our Research Development & Operations (RDO) team to embrace DevOps in earnest.

New products and services were being deployed through automated pipelines instead of through our traditional release schedule. Instead of bundling lots of small changes into a product release, developers were now looking to create, test and deploy features individually–and autonomously. This was awesome! But also, our change management program–even in its simplicity–was not going to cut it.

“ We could not let change control become a blocker in an otherwise automated process. We looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets, giving us visibility into the work being done. ”

So with the four tenets we used to build our main program, we set off to evolve change management for our automated deployments. Thankfully, because all the impacted teams have seen the value of our change management program to date, they were on board and instrumental in evolving the program.

But an additional tenet had to be considered for the pipeline changes: we could not let change control become a blocker in an otherwise automated process. So we looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets, giving us visibility into the work being done. We defined levels of risk tied to the deployments and set approvers and release windows based on risk. This serves both as a control to minimize potential impact to customers and as a challenge to developers to push code that is as resilient and low impact as possible so they can deploy at will.
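As a rough illustration of that pattern (not our actual integration; the ticketing endpoint, risk tiers and field names below are hypothetical), a pipeline step that records a change and checks whether it may be released could look something like this:

```python
import datetime
import requests  # assumes the requests package is available in the pipeline image

# Hypothetical risk tiers: who must approve and when the change may go out.
RISK_POLICY = {
    "low":    {"approvers": [], "window": "anytime"},
    "medium": {"approvers": ["team-lead"], "window": "business-hours"},
    "high":   {"approvers": ["team-lead", "change-manager"], "window": "maintenance"},
}

TICKETING_URL = "https://ticketing.example.com/api/changes"  # hypothetical endpoint

def record_change(service, risk, description):
    """Open a change ticket automatically so the CAB has visibility without blocking the pipeline."""
    policy = RISK_POLICY[risk]
    payload = {
        "service": service,
        "risk": risk,
        "description": description,
        "required_approvers": policy["approvers"],
        "requested_at": datetime.datetime.utcnow().isoformat(),
    }
    resp = requests.post(TICKETING_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json().get("ticket_id")

def release_allowed(risk, now=None):
    """Gate: high-risk changes wait for a maintenance window (here, 02:00-05:00 UTC)."""
    now = now or datetime.datetime.utcnow()
    if RISK_POLICY[risk]["window"] != "maintenance":
        return True
    return 2 <= now.hour < 5

if __name__ == "__main__":
    ticket = record_change("billing-api", "medium", "Add retry logic to invoice export")
    print(f"Created change ticket {ticket}; release allowed now: {release_allowed('medium')}")
```

The point of the sketch is the shape of the control: the ticket is created for visibility, and only the risk rating decides whether a human approver or a maintenance window stands between the developer and production.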

We still have work to do. Today we are tracking when changes are deployed manually. In our near-future state, our pipeline tooling will serve as a gate, holding higher-risk deployments for release during maintenance windows. Additionally, we want to keep the focus on risk, so we are building in commit hooks with required approvers based on risk rating. And, again, because we worked closely with the impacted teams to build a program that fit their goals (and because our existing program had proven its value to the entire organization), the new process is working well.

Most importantly, evolving our change process for our automated workflows allows us to continue to best serve our customers by iterating faster and getting features and fixes to the market faster.

Connect with Michelle Killian on LinkedIn.

Finding Malware that Prevention Tools Miss (Video)

Hunting for known malware

All security teams have their go-to industry intel sources for brand-new indicators of compromise (IOCs), and like you, we’re continually on the lookout for new threat intel tools to look for the footprints of malicious activity. But once you’ve identified a suspicious file or confirmed a malicious MD5 hash, the challenge for your security team is finding all the hosts in the organization that have the affected files. This kind of visibility is critical for mitigating any potential malware impacts, but it’s also critical to avoid wasting time cleaning uninfected hosts. Without this visibility, organizations are forced to take a “better safe than sorry” approach — and that leads to the frustrating situation where endpoint re-images or remediations are performed without knowing whether devices were actually infected.

A simple search bar changes everything

Security teams deal with questions — big and small — all day long. The simple search bar of Code42 Forensic File Search is a powerful tool for answering some of the most important questions, including, “Does known malware have a foothold in my environment?” But the usefulness of Code42 Forensic File Search isn’t limited to just finding malware. In the Code42 security team, we use Code42 Forensic File Search for malware investigations and monitoring. When our antivirus and EDR tools identify malware threats, we use Code42 Forensic File Search to validate those findings across the environment and dig deeper. After malware has been located on a device and remediated, we continue to monitor files on that device with Code42 Forensic File Search to ensure there are no further signs of infection.

With the ability to instantly search for known malicious MD5 hashes across every host in your environment, you can shave days off investigating and remediating malware events. More importantly, this complete, instant visibility gives you the assurance that you’ve identified and addressed the threat to the full extent.
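Code42 Forensic File Search answers that question from a central index across all endpoints. For readers who want a feel for the underlying idea, here is a hypothetical single-host sketch that walks a directory tree and flags files whose MD5 matches a list of known-bad hashes; the IOC list and starting path are placeholders, and a real hunt would of course need to cover every host, not just one.

```python
import hashlib
from pathlib import Path

# Hypothetical IOC list of known-malicious MD5 hashes from your threat intel feeds.
KNOWN_BAD_MD5 = {
    "44d88612fea8a8f36de82e1278abb02f",  # EICAR test file, used here purely for demonstration
}

def md5_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large files don't have to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def hunt(root):
    """Walk a directory tree and report any file whose MD5 matches a known IOC."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                if md5_of(path) in KNOWN_BAD_MD5:
                    print(f"MATCH: {path}")
            except OSError:
                pass  # unreadable file; skip and move on

hunt("/home")  # placeholder starting point; adjust for your environment
```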

Happy threat hunting!

Finding Files in the Wild: From Months to Hours

Every day, your organization faces a variety of data security challenges. Many come from outside your organization, but a significant number also come from within. There are countless reasons why someone may take sensitive data from your organization, many of which are purely opportunistic. For example, what if a file with sensitive financial information is mistakenly emailed to an entire company? That may prove too tempting an opportunity for some. How can your organization respond when this happens? In this post, I’ll discuss how the response process often works today—and how it can be streamlined with Code42 Forensic File Search.

A true story

Here’s a true story of an IT team working through just such a challenge: At this organization, the HR team used Microsoft Excel for management of financial information such as bonus structures and payout schedules. By mistake, a member of the team sent an email containing an Excel file with compensation information for the entire staff to the whole company, instead of the select few who were supposed to receive it. Over 6,000 employees worldwide received the email.

Fortunately, the most sensitive information was contained on a hidden tab in the Excel file, and most employees never even opened the file. The IT team was able to recall the email, but the legal team needed to know who in the company had downloaded and opened it, in case the information within was ever used in a lawsuit. The IT and Security teams were tasked with finding every copy of the file in the organization.

A painful two-month process

While recalling the email cut the number of potential endpoints to search to around 1,000, the IT team still had to search all those devices—many of which belonged to individuals at the organization's international offices. The IT team used a Windows file searching utility to crawl the user endpoints in question, searching for the name of the file. However, Outlook can scramble the names of attached files, so the IT team also had to scan for any Excel file in the Temp folder of each machine and open those files to visually confirm whether each was the file in question.

Each scan would take between one and eight hours, depending on the size of the drive—and the scan could only be run when the target endpoint was online. If a laptop was closed during the scan, the process would have to be restarted. If a device was located in an international office, the IT team would have to work nights in order to run the scan during that office’s working hours.

The process was a tremendous hit to productivity. The IT team tasked fully half its staff with running the scans. Two of the organization's five security team members were tasked with overseeing the process. Even the legal team's productivity was affected. Since the IT team had to open every version of the file to verify the sensitive financial data within, the legal team had to draw up non-disclosure agreements for every person working on the project.

All told, the search for the mistakenly distributed financial file took the organization two months, and the IT team estimated that they had only recovered 80 percent of the instances of the file.

“ With Code42 Forensic File Search, administrators can search and investigate file activity and events across all endpoints in an organization in seconds. ”

A better way: Code42 Forensic File Search

Fortunately, there is a better method for locating critical files in an organization. With Code42 Forensic File Search, administrators can search and investigate file activity and events across all endpoints in an organization in seconds. In the case of this Excel file, the IT team could have used Code42 Forensic File Search to search for the MD5 hash of the file. By searching for the MD5 instead of the file name, Code42 Forensic File Search would locate all instances of the file across all endpoints, including versions that had been renamed in the Temp folder or renamed to intentionally disguise the file. This single search would find all copies of the file, even on endpoints that are offline.
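The reason a hash search catches renamed copies is that an MD5 is computed from a file's contents, not its name. The hypothetical snippet below (with a made-up spreadsheet and an Outlook-style scrambled copy) shows that the hash stays identical after a copy or rename, which is exactly what makes a content-based search reliable.

```python
import hashlib
import os
import shutil
import tempfile

def md5_of(path):
    """Content-based fingerprint: the file name plays no part in the hash."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    # Hypothetical sensitive spreadsheet and an Outlook-style scrambled copy of it.
    original = os.path.join(tmp, "2018_bonus_schedule.xlsx")
    with open(original, "wb") as f:
        f.write(b"pretend spreadsheet contents")
    scrambled = os.path.join(tmp, "ATT00017.xlsx")
    shutil.copy(original, scrambled)

    target_hash = md5_of(original)
    matches = [name for name in os.listdir(tmp)
               if md5_of(os.path.join(tmp, name)) == target_hash]
    print(matches)  # both file names appear: identical content means identical hash
```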

The feature video demonstrates Code42 Forensic File Search in action. The IT team member that shared this story is confident that it would have played out very differently with Code42 Forensic File Search. “Had we had Code42 Forensic File Search deployed, that project was probably done in a couple hours,” he said. “We would have cut two months to a couple hours.”

Is GDPR-Regulated Data Hiding in Pockets of Your Organization?

Data breaches that compromise critical customer information are the worry that keeps IT people up at night. Unfortunately, what’s considered critical customer information and what you must do to safeguard it has changed dramatically, thanks to GDPR. IT stakeholders at American companies who’ve assumed GDPR does not apply to them may want to take a closer look at what the implications are for U.S.-based companies. GDPR-regulated data can be found in places you might not expect, and the tools you’ve been using to keep track of that data may not provide the visibility you need in case of a breach.

Where does GDPR apply?

First off, don't think because you're an American company only doing business in the U.S. that you're exempt. If you capture any data about someone in the E.U., like a visitor who stumbles across your website and sends a question through a contact form, you're on the hook for GDPR.

So where does the data regulated by GDPR live in your organization? The short answer: everywhere your customer data lives and travels within your organization. That doesn’t just mean your CRM system. Employees routinely download and use personal customer information on their endpoint devices, even when company regulations forbid it. You may or may not be surprised to learn that the C-suite is the worst offender at this.

The scope of what is considered “personal information” under GDPR is much broader than you might expect. While most companies already take steps to protect sensitive information like credit card numbers or social security numbers, GDPR takes it much further and could signal a sea change in data collection. Any information that can be used to identify a person, like IP addresses and names, is covered under the regulation, and the definition extends to any data that could potentially identify a person. So, if you're capturing it, it's worth protecting.

What does data encryption protect against?

Many IT directors hit the pillow every night with the misguided confidence that their data encryption will prevent any GDPR-related problems. Unfortunately, that’s not always the case.

Data encryption is a useful tool if your data compromise doesn't include credentials that unlock the encryption. But if your data is compromised because of stolen credentials, then encryption doesn't matter. This can happen when company-issued laptops are stolen, a common occurrence. It can also happen with malicious employee activity – if employees with valid credentials decide to exfiltrate data, encryption won't do a thing to stop them.

What happens after a data breach?

Talk about sleepless nights for an IT director. For companies that experience a data breach, the hours and days after discovery are usually a mad scramble to assess what’s been compromised and by whom. The time and money spent to unravel the tangles of compromised data in an organization can add up fast. And GDPR doesn’t give you much time. You have 72 hours after discovery of a breach to notify GDPR authorities if personal information has been affected.

The problem for most companies is that they don’t really know where all their customer data is stored. A lot of it can end up on employee laptops and mobile devices. To truly protect their data assets, companies must have a firm understanding of where all their data travels and lives.

Data visibility

Being able to immediately and clearly locate customer data is critical to surviving a breach of GDPR-regulated data. A strong endpoint visibility tool can provide a quick understanding of all the data that has traversed an environment—and importantly for GDPR, whether that data contains personal information.

An endpoint visibility tool can also tell you with confidence if compromised data does not include personal information that would fall under GDPR. That would prevent you from unnecessarily alerting the authorities.

Unfortunately, data breaches continue to happen, and there’s no sign of that abating any time soon. When the collection of consumer data is necessary, companies should consider it sensitive and use endpoint visibility tools to protect it.

Decoding the 72-Hour GDPR Doomsday Clock

The GDPR 72-hour reporting requirement has notable similarities to the insane ultra-marathons elite athletes run in the same time period. The 72-hour time limit requires companies to cover ground they’d typically take weeks or even months to traverse—kind of like running more than 300 miles in three days.

With data stored in unexpected places, that 72 hours can get eaten up quickly in trying to sort through where compromised data is stored. But with a robust endpoint visibility tool, which allows a response team to see the content of endpoint data clearly, the GDPR clock doesn’t have to spell doom.

What should you do if you discover a data breach?

Round up your response team. Depending on the size of your business, your data breach response team may include several dedicated personnel in addition to other key company stakeholders, or it may be a few individuals who do this along with their other duties.

Gather key information. Figure out what happened, what was the cause of the breach, and what type of data was compromised. This step is where companies that don’t have an endpoint visibility tool will see precious hours of their GDPR clock tick away as they try to determine what data was compromised. An endpoint visibility tool that provides clarity on the content of data will answer that question with confidence.

What if no personal information was compromised?

If, after using an endpoint visibility tool or another assessment process, you ascertain that no GDPR-regulated data was involved, breathe a sigh of relief. You don’t need to notify the GDPR authorities. You should, however, continue through your plan: clean up the data breach, close the holes that caused it, and notify any impacted customers.

What if GDPR-regulated data was affected?

Then the clock retroactively starts ticking from the moment you first discovered the data breach, and you notify the GDPR authorities. When you alert the regulators, it's best to have all your ducks in a row. If you can tell the authorities exactly what happened, who was involved and how you plan to remediate it, you will be better positioned to resolve the issues. An endpoint visibility tool will provide you with the information necessary to make reporting to the authorities a much smoother step.

What happens after GDPR authorities are alerted?

You continue the process of cleaning up, plugging the holes, and notifying the consumers affected by the breach. So far, the GDPR authorities have only specified that consumer notification happens “without undue delay.”

A data breach is always fraught with uncertainties, which is part of why companies typically take a long time to sort through the details and make public statements. With GDPR, companies no longer have the luxury of time, so it’s important to remove as much uncertainty as possible from the situation, to gain clarity quickly. An endpoint visibility tool can help speed up the process and provide confidence in a company’s findings after a breach.

There’s no way around it: the aftermath of a data breach with GDPR-regulated data will feel like a marathon. Having an endpoint visibility tool in place before the breach happens is like cutting that 300 miles down to a much more manageable 26.2 miles. It’s still a race you need to prepare for, but it’s a far more sane and feasible experience.


Data Visibility Is the Key to GDPR Compliance

When we were young, most of us held the belief that what we couldn’t see couldn’t hurt us. We huddled in bed with the covers over our heads so we couldn’t see the monsters in the darkness, and somehow limiting our vision this way helped us feel safe.

As adults, we understand that ignorance isn't protection, and being unaware of what's out there doesn't keep us safer. And yet, too many IT organizations can't tell you what data lives on their employees' devices. “Well, that doesn't matter,” some IT leaders will say. “All of the valuable data in our company is on the network.”

Not true.

Code42’s CTRL-Z study showed that over 60 percent of corporate data is stored on user endpoints. With the enactment of the General Data Protection Regulation (GDPR) drawing closer every day, turning a blind eye to the data on your employee endpoints could have disastrous results. To protect company assets and meet GDPR compliance standards, organizations need to have a firm understanding of where personal customer data is stored and how it moves through their system. In other words, IT teams need to be able to see where all of their data is created, stored and shared.

GDPR is concerned with the movement of customer personal data, which is broadly defined by the regulation. It’s true that your average employee may not have customer social security numbers on their laptop, but personal information can be anything that might identify an individual, down to phone call metadata. If there’s a one percent chance a piece of data could identify a customer, GDPR requires you to treat it as carefully as you would a credit card number. And like it or not, this type of data does leave your corporate firewalls. Employees take their work home with them all the time; think about the sales rep who brings home background info on a customer to prepare for a big sales pitch.

Your leadership team does this as well. In fact, according to the CTRL-Z report, C-suite executives are the most likely to violate company data security policies. These policies are crucial, but they can’t overcome human nature. You need a data visibility tool to track data no matter where it moves, so if you do get breached, you can account for what information was impacted–and where and how.

Without that kind of data visibility, staying in compliance with GDPR will be a challenge. According to GDPR, companies only have 72 hours to report an incident once it is detected. But if you don’t know where your data lives, you have no way to gauge the impact of a breach. In the event that data is compromised, knowing exactly what has been exposed will make interactions with the regulatory agency much smoother.

It might be tempting to pull a blanket over your head, ignore the data that lives on employee endpoints, and hope for the best. That may have kept you safe from the monster under the bed, but it won't keep you safe from potential fines for GDPR non-compliance: up to €20 million or four percent of annual global revenue, whichever is greater. It's time to recognize that data protection starts with data visibility.