Microsoft and Code42 Ignite the Focus on Insider Threat

The entire Code42 team had a great time attending Microsoft Ignite in Orlando. Microsoft Ignite brings together more than 25,000 attendees with keen interests in software development, security, architecture and IT. I have to tell you, before going to Ignite, I assumed attendees would hold a clear bias toward IT challenges rather than the broader challenges facing enterprise security.

Fortunately, I was mistaken, and it quickly became apparent that security and cloud concerns were a big part of the conversation. For all of us at Code42, that meant we were in store for an exciting week. We came to Ignite with a significant announcement – our new integration with Office 365 email.

More tools to mitigate insider threat

Why integrate Code42 with Office 365 email? There are a couple of reasons. First, while there’s been plenty of talk about the demise of email as the top communication platform, the reality is that the amount of confidential and proprietary information sent every day as email attachments is mind-boggling, and enterprises need better controls. Second, while Office 365 email does provide ways to create email policies and flag risky emails, Code42 provides complementary insights and valuable investigative information into the who, what, when and why (as I like to call it) around the files. This is just another way Code42 helps our customers mitigate insider risk.

We also showcased some new Code42 capabilities that enhance the workflow for departing employee data exfiltration detection. As you may already know, managing the data exfiltration risks associated with departing employees has been a significant effort for Code42. When it comes to mitigating insider threats and data breaches, it turns out that departing employees are notorious for taking trade secrets, confidential information, and other types of intellectual property with them as they leave organizations for new companies.

The departing employee challenge is exacerbated by two factors: first, most organizations don’t have a data exfiltration mitigation policy in place for departing employees; and second, there typically aren’t technologies or applications available to assist in the departing employee workflow. This is precisely why Code42 developed and released its new departing employee workflow capabilities.

“ The departing employee challenge is exacerbated by two factors: first, most organizations don’t have a data exfiltration mitigation policy in place for departing employees; and second, there typically aren’t technologies or applications available to assist in the departing employee workflow. ”

Being able to showcase such powerful new capabilities and seeing the positive reactions from such a large crowd was one of the most rewarding parts of Ignite for me. Of course, Code42 SVP Rob Juncker got us off to the ideal start with a session mainly dedicated to insider threat and the importance of having a well-defined off-boarding process to protect valuable IP when employees leave.

The new capabilities were a hit among attendees. But more importantly to me, the new departing employee capabilities were the catalyst for conversations about how organizations currently handle departing employee workflows. These conversations largely confirmed what we’ve been saying here at Code42: typical departing employee workflows are either under-developed or non-existent. No wonder insider threat continues to be on the upswing!

While Ignite gathers an IT-centric audience, what we learned is that when it comes to insider threat, multiple departments are part of the conversation. It isn’t uncommon to expect IT, security, compliance as well as HR teams to be in the mix when figuring out the best course of action to manage insider threat.

Demos, doughnuts and a customer’s personal account

We were also fortunate to be joined by one of our customers, David Chiang, an IT system engineer at semiconductor provider MACOM. David presented on how MACOM relies on Code42 to detect, investigate and respond to insider threats and file exfiltration. He framed the departing employee threat perfectly when he explained how, when a departing employee tells MACOM that they’re “just taking personal pictures,” MACOM can now (thanks to Code42) look back and validate whether that’s so. “If we access the files and find that it was company property, the conversation changes,” he explained.

And under those circumstances, that conversation should change. The problem is that too many organizations – actually, the vast majority – don’t have such processes and technology in place to give themselves that level of visibility. Hopefully, our data security and departing employee announcements, along with an excellent, in-depth success story from one of our customers (shared over some excellent mini doughnuts), resonated and will change the status quo for the better.

While Code42 went into Microsoft Ignite intending both to learn and to educate about insider threat, it turned out we weren’t alone. There were two other significant announcements that reinforced the importance of mitigating insider threats. The first was Proofpoint’s acquisition of ObserveIT. Why does it matter? ObserveIT has been in the insider threat space for quite some time, and this acquisition is clear validation that Proofpoint views insider threat as an integral expansion of its security portfolio moving forward. The second announcement came from Microsoft itself, which unveiled an Insider Risk Management tool within Office 365 designed to help identify and remediate threats coming from within an organization.

I’m happy to say that the many announcements, as well as attendee interest and conversation around the issue, give me hope that insider threat programs are about to take center stage in managing enterprise data risk. And next year’s Microsoft Ignite is bound to dig even deeper into insider threat and all of its associated risks. We can’t wait to be there.


macOS Catalina Creates Kernel Crisis for Legacy DLP

Apple released the new macOS Catalina on October 7, setting IT and security teams abuzz about the logistics of upgrading their users, excitement about new features and concerns about the pains that always come with change. But security experts have revealed a troubling impact: macOS Catalina entirely disallows kernel extensions (kexts). This isn’t just another instance of “kernel panic” — this is a full-blown kernel crisis: Legacy DLP products will cease to work in the Mac environment going forward.

“ Legacy DLP products will cease to work in the Mac environment going forward. ”

Catalina goes read-only — disallows kexts

With the release of Catalina, Apple shifts the entire macOS to read-only, regardless of permissions. Kernel extensions are completely disabled. This change strengthens the overall security stance of the macOS. But it’s a major problem for legacy DLP products like Symantec and McAfee, which depend on kernel extensions for their core functionality.

Legacy DLP simply won’t work in Catalina

Disallowing kernel extensions disables the blocking functionality of legacy DLP products. The products will technically still “run” on Catalina (with the usual kernel panics and other pains), but they’ll no longer be able to work the way they have — no more blocking risky user actions. In effect, legacy DLP will cease to work altogether. At a time when insider threat continues to escalate, companies simply can’t afford to risk leaving their data exposed.

You can’t afford not to upgrade

Most legacy DLP vendors are approaching the kernel crisis carefully. They’re reaching out to customers with one-to-one communications, trying to convince them not to upgrade to Catalina so they can retain the functionality of their DLP products (see, for example, the table on Symantec’s support page). But not upgrading is not viable in the long term. You need to give your users access to the latest features of Catalina; moreover, your users will demand the upgrade. And your security team can’t afford the risks of lagging behind on OS updates.

Current recommendation found on the Symantec support page. The latest Catalina release makes the security gap evident for legacy DLP customers.

There’s not a ton of time to waste, either. Apple will end updates, security patches and support of macOS Mojave in less than 24 months. That means most organizations need to begin planning their upgrades—including how they’ll fill the enormous security gap — now.

DLP for Macs has always been painful

Running legacy DLP on macOS has always been frustrating—a “square-peg-round-hole” problem that creates more work for security teams and increases the potential for dangerous gaps in visibility and protection. But the clear trend is that Apple is making it even harder for DLP to function in macOS — leading to more kernel panics, frustrations and potential security gaps. So the “kernel crisis” of the Catalina upgrade isn’t coming out of nowhere. The reality is that legacy DLP was not built with Macs in mind, and this disconnect is coming to an urgent head.

Code42 is next-gen data loss protection built for Macs

At Code42, we know the pains of legacy DLP for Macs firsthand — and built our Code42® Next-Gen Data Loss Protection solution to mesh seamlessly with macOS. We understand macOS better, so we approach things differently by:

  • Working at the file-system level to focus on what really matters — your file data         
  • Monitoring the applications that access, interact with and touch those files
  • Giving you deeper, broader visibility into all file activity — across your endpoints, in the cloud and in applications

We don’t have to muck around at the kernel level, playing the whack-a-mole game of activity-blocking. All of this means that the robust functionality of Code42 Next-Gen Data Loss Protection is completely unimpacted by the security improvements of the Catalina upgrade.

Providing the business-critical push to move to next-gen data loss protection

Most security pros already know the many pains of running legacy DLP products on Macs. So, the good news is that the Catalina kernel crisis will give many security teams the final push they need, providing a business-critical reason to move to a better data loss protection solution. In fact, several of the world’s leading tech companies anticipated the Catalina kernel crisis and have turned to Code42 Next-Gen Data Loss Protection: not just to fill the gap created by the Catalina upgrade — but to help them build a more forward-thinking, future-ready data loss protection strategy.


2019 Evolutionary Award Winners Showcase Innovation in Data Loss Protection

With all the scary statistics out there about the growing data security threats in the enterprise world, it’s easy to lose sight of a more optimistic fact: Enterprise data security is getting better — and organizations everywhere are building smarter data loss protection programs. Each year, the Code42 Evolutionary Awards celebrate the smart, innovative and just-plain-cool ways that organizations are protecting their data. This year, we recognized 10 organizations for their extraordinary innovation in data loss protection. Let’s take a look at the 2019 Evolutionary Award winners:

Evolutionary Award: BAYADA Home Health Care

BAYADA Home Health Care won the namesake Evolutionary Award for completely evolving the way their company secures data, protects IP, and enables users. Their data security journey began with safeguarding training videos in the cloud for their mobile workforce, then expanded to protecting data from the threat of lost and stolen laptops. BAYADA’s current project is to ensure that their proprietary and regulated data is secured and monitored for loss and proper usage. “Protecting data is impossible if you don’t have comprehensive visibility into where your data is, and to accomplish this you need the right tools,” says Craig Petrosky, director of Desktop Equipment Services for BAYADA. “That’s why it was critical for us to implement a solution that provides near real-time detection and the ability to respond to cases of data loss, leakage, misuse, or potential exposure.”

Guardian Award: Cisco

Cisco won the Guardian Award for a security team that creatively and effectively fends off an array of threats — from ransomware to malicious insiders — to protect its valuable data. Cisco has developed countless data protection workflows, using Splunk to develop actionable insights about how data may be infiltrated and exfiltrated from the organization. “In today’s data landscape, it is important to have a solid data collection agent, one that offers insight into where data is, where it’s moving and where it’s been. A tool that can offer this is invaluable for insider threat investigations,” says Kevin Currie, CSIRT investigator at Cisco.

Rookie Award: Ironwood Pharmaceuticals 

Ironwood Pharmaceuticals won the Rookie Award for an organization that has successfully deployed a new software product within the past year. Deploying new software is never a small feat; Ironwood Pharmaceuticals did so with a de-merger on the horizon, knowing that it would soon have to split its deployment in two. “When our organization was going through the de-merger, we needed a simple and flexible solution to ensure our data is protected,” says Lian Barry, manager, end user support for Ironwood. “We found a solution that has provided constant assurance that our data is protected throughout this period of increased organizational change.”

Harmony Award: MacDonald-Miller 

MacDonald-Miller won the Harmony Award for striking a balance between data protection and empowering employees to be productive and collaborative in order to deliver results to the company’s bottom line. Two of MacDonald-Miller’s top security priorities are that users never experience downtime from data loss, and that valuable data is not leaving with departing employees. “Our data is our competitive advantage,” said Eddie Anderson, technical business analyst at MacDonald-Miller. “It’s critical for us to protect data from loss, leak and theft, while enabling our employees to collaborate and work at the speed of business.”

Evangelist Award: David Chiang, MACOM

David Chiang, IT system engineer of MACOM, won the Evangelist Award for an individual with expertise in data loss protection who sets industry best practices and actively shares them with peers. Chiang’s passion for software deployment and systems integration began with an intern project and has evolved into deep expertise on protecting data in the midst of a digital transformation. “Digital transformations are exciting, but they can put data at an elevated risk,” says Chiang. “It’s important for organizations to take steps to protect their most important asset — their data — during these times.”

Atlas Award: Proofpoint

Proofpoint won the Atlas Award, honoring an organization for deploying and protecting an expansive global workforce. As the Proofpoint organization grew quickly through M&A, business continuity and user productivity were top priorities set by the CIO. “With help from professional services, we were able to quickly go from nothing to a fully deployed data collection agent that can support our global workforce, ensuring we never experience data loss. We had a very successful deployment, and it proved ROI within four months,” says Brock Chapin, systems administrator at Proofpoint.

Trailblazer Award: Schneider Electric 

Schneider Electric won the Trailblazer Award for improving a critical workflow or process for its organization. The company developed a custom app, used as part of its computer depot service, which collects and recovers data — in order to streamline, expedite and standardize the service. The results: time saved for technicians, reduced end-user downtime and improved user experiences. “As anyone in IT knows, positive user experience is critical to the effectiveness of any technical program. Our custom app not only provides that user experience, but it also lets users get back to work faster through decreased downtime,” says Austin Joe, endpoint solutions senior engineer, enterprise IT, at Schneider Electric. “We couldn’t be happier with the results.”

We’re in this together

Join us in giving a virtual round of applause for these successful and innovative organizations. These examples not only represent major achievements for the organizations themselves, but the overall progress of the collective community of enterprise data security professionals. As your security team tackles emerging and evolving data loss challenges, don’t forget that you have a powerful resource in your Code42 peer network. From looking to examples like the customers highlighted here as inspiration or blueprints for your own initiatives, to consulting with other data security professionals to get answers, advice and guidance, we encourage you to leverage this valuable connection to some of the enterprise security world’s best minds and biggest thinkers. While the details differ, we face the same threats, manage the same challenges and share the same goals. We’re in this together.

We look forward to seeing how your data protection strategy continues to grow in the future. Nominations for extraordinary innovation in data protection for the 2020 Evolutionary Awards will be open soon.

Today’s Five Biggest Overlooked Data Security Trends

In the weeks following Black Hat USA 2019, I’ve done a little traveling from conference to conference – and, in between all that, met with a few customers. In those conversations, I’ve noticed that the key themes that emerged at this year’s Black Hat (all of which I’ve outlined below) have been holding strong throughout customer conversations. I believe these will be the trends we’ll continue to see throughout the last leg of this year, and well into 2020.

1: Complex Solutions

The first trend that stuck out is how complexity remains too high in cybersecurity. Many vendors continue to talk about how sophisticated their products are and how they can solve complex problems. In doing so, these tools become inherently very complex and unwieldy themselves. There’s a large and relevant inconsistency here: on one hand, the security industry, and really all enterprises, struggle with a serious shortage of skilled cybersecurity personnel. On the other hand, the complexity of the toolsets continues to rise. Something has to give.

Of course, these tools are aimed at people who are assumed to be masters of their trade, and who are able to make informed decisions as they examine data subtleties. Finding people with such talents continues to be one of the biggest challenges in the security industry, and without such staff, these tools end up being misused, or even unused.

2: Skills Gap

The second trend is how vendor complexity exacerbates the skills gap. As more organizations look to hire less experienced security staff in hopes of training them quickly, security vendors need to provide the market with products that enable these newcomers to be as effective as seasoned security professionals.

If we want to get information security right in the next 10, 15 or 20 years, the industry must make products and tools that are easier for this next generation of security professionals to consume. Innovative technologies like machine learning and AI are indeed exciting, but they need to be coupled with easy and prescriptive solutions that new security professionals can start using right away without having to be experts first.

3: Communication is Key

The third trend: security vendors need to improve how they communicate their value. By walking the show floor at Black Hat and engaging with various security vendors, you’ll quickly realize that they don’t communicate their value propositions very clearly. It’s a real challenge to determine what many vendors actually do and make sense of whether or not these “solutions” actually solve specific challenges.

This is an area where the entire security industry can improve. The focus needs to be on how to better communicate the value of products and services, and how they provide better business outcomes. However, it’s not just security vendors that should be thinking about how they impact business outcome versus just tools and technologies; security engineers, architects, directors and CISOs must also do a better job of discussing business outcomes and how their investments will improve those outcomes.

4: Management Challenges

The fourth trend is that the challenges associated with managing data loss remain high, and frustration with data loss management continues to build.

In fact, all of the leading data loss prevention vendors still talk about how they use AI to help classify data and automatically create data loss policies. However, none has reached the point of being able to help security teams that lack the wherewithal to undertake a monumental, months- or years-long project to classify all of their data before DLP can even be deployed.

Related to this is how understaffed and stressed most security teams seem to be. At the conference, I met with growing enterprises whose staffing levels are so thin that a single security person supports more than 100 employees. That ratio is untenable, and it’s why the coolness of a technology is beside the point: if a tool doesn’t help security teams that are under constant stress, it simply doesn’t matter.

“ Making data-loss protection seamless and able to be managed by security teams of any size is something that we think a lot about at Code42. We focus on solving real-world cases, such as dealing with data loss risk by departing employees and high-risk employees in ways that don’t require hundreds to thousands of staff work hours to get right. ”

5: Product Consolidation

The final trend is the continued high level of technological and product consolidation occurring within the security market. This has been going on for some years now, and it’s continuing to accelerate. Security vendors continue to expand to adjacent problem spaces with complementary solutions – be it a DLP vendor acquiring CASB products, or a next-gen firewall solution adding EDR and SOAR capabilities to their portfolio. Elevating the business value to customers is one of the biggest drivers to increase user adoption of these new products and technologies.

These are the trends I noticed while exploring the show floor, speaking with vendors about the issues they are trying to solve, as well as meeting with customers and prospects. While the challenges are steep, I’m convinced that the industry and security professionals alike are motivated to learn, adapt and improve in order to solve the intricate obstacles we face, such as insider threat. We should expect to see solid progress in these areas in the next year.

Zero Trust Starts with Data Security

Recently, I joined co-presenter Chase Cunningham from Forrester for a webinar titled, “Zero Trust starts with Data Security.” You can’t be in security and not have heard of Zero Trust. It’s become marketing fodder to a lot of folks, so our goal was to present a very real-world scenario of what was driving the Zero Trust movement. Recently, Code42 commissioned Forrester Consulting to evaluate challenges that organizations face using traditional data loss prevention solutions. They surveyed 200+ security budget decision makers in the U.S. at organizations with 1,000 to 4,999 employees.

Here is a summary of the key takeaways from the webinar: 

It’s war! 

Make no mistake, we are living in a warfighting domain in cyberspace. In fact, in 2010 the U.S. Department of Defense declared cyberspace a warfighting domain. Simply put, your business and its associated data are in the middle of a war zone.

Compliance is more than a checkbox!

You can be compliant or you can be secure. Often organizations that choose to just be compliant are still setting themselves up for major security breaches. The analogy Chase used to explain this idea in the webinar is reason enough to watch the replay.

DLP isn’t the second coming. Prevention isn’t enough.

There is plenty of market frustration about the current state of DLP. Users have essentially checked out and are recognizing that there is a critical protection layer missing from the security stack.

Insider threat is on the rise. 

Here’s a stat to ponder: Ninety percent of insider data loss, leak and theft goes undetected internally.

Departing employees are taking your data.

Fifty percent of the labor force is already looking for new employment, and half of those job seekers have been with their current employer for less than a year. They are quitting at alarming rates, and they are taking your data when they go!

Workflows don’t exist.

We asked a very simple question of today’s organizations: Do you have a departing employee workflow? While badge and device collection are standard HR protocols, we heard crickets when it came to “collecting the data.” Simply put, organizations do not have a process for protecting corporate data when employees leave. 

Data is no longer the core focus. Everything else is.

Solutions and training have shifted the focus away from the core problem of the “data” itself. Prevention-oriented solutions are so focused on policies, classification and blocking, etc., that they are ignoring data altogether, which is a critical element in the Zero Trust approach. 

Zero Trust is a timely reminder…

To focus on the data! 

All data matters

At the core of Zero Trust is an approach rooted in collecting all data, not culling it out. 

It’s about data loss protection 

You have to complement a prevention-focused approach with protection measures because ultimately it is imperative to reduce the time to detect, investigate and respond to a data breach. 

Follow the data, not the employee!

While it can be easy to get suckered into a “Big Brother” mindset of monitoring employee movement patterns, all you really need to do is understand data movement patterns. After all, it’s the data the employee is after! 

To dive into the details of this webinar some more, catch the entire on-demand version here.


Tips from the Trenches: Building a Security-Minded Organization

As a security software company, it’s essential that everyone at Code42 thoroughly understands the security industry. This is true for nearly every position. Our sales teams need to fully understand the needs of our customers—and human resources need to understand security as they recruit candidates in the security industry, where it’s highly competitive to find the requisite talent. 

Marketing clearly needs to understand not only the big-picture security needs of our customers, but also the daily life and day-to-day challenges of a security analyst. Furthermore, as security becomes an integral component in DevSecOps, developers need to better understand application security, which means that security folks also need to up their code writing skills.

Of course, not everyone requires the depth of knowledge one would expect of a professional security team, but everyone who works at a security software company should understand security basics. With that goal in mind, we created the new Security Ninja program, designed to teach security and enable employees to earn new belts as their mastery progresses. The belts start with a white belt and culminate with a black belt, which requires a security certification to earn. These Code42 security ninjas become our security ambassadors within the company.

This self-driven program, which begins when an employee registers to earn a belt, can be completed on the employee’s own schedule. Credits are allocated by time spent learning and consist of a mix of free online training (including YouTube videos), attending security lunch-and-learns, and sharing lessons learned on our company’s Slack channels. When an employee shares his or her lessons learned on our internal Slack channels, it makes me smile, because we now have employees teaching each other what they know about information security.

For security awareness teams, watching employees gain security knowledge beyond what is required for compliance is a dream come true. These trainings are no cakewalk, mind you: the belts require that applicants not be late on any of their security or privacy trainings, and that they not have clicked on a link in a test phishing email. If they have, they can apply to continue their training in the following quarter. Since we implemented the Ninja program last January, we’ve seen training completions rise and fewer clicks on phishing-test links. This is a huge win.

To keep engagement high, we’ve built the program to be competitive and also fun and lighthearted. We regularly communicate about the program on our company-wide Slack channel. Some managers have set goals for their teams to gain their belts and initiate a bit of friendly competition in the process. Our sales teams are thrilled to expand their security expertise to better understand our customers and prospects and to speak their language.

Here’s how applicants earn their belts: First, they must provide evidence of completion on the learning activities they chose, even if it’s just a screenshot. Once they’ve gained the required amount of training credits, applicants can then take an online exam in our Learning Management System (LMS). At the end of the quarter, the LMS list of successful exam completions becomes my starting list to check off evidence submitted by each applicant. I check evidence “audit style” by randomly selecting people to audit; the truth is, however, that I’m so thrilled at the work they are all doing that I tend to review all evidence submitted, especially the “lessons learned.” There is no greater sense of satisfaction for a security awareness professional. 

Each quarter, we celebrate all of the new ninjas and award them their “belt,” i.e., a colored badge with an outline of a ninja. The ninjas can attach the belt to their badge holders or lanyards to proudly display their ninja-level status. Of course, we have fun with this, too, by inviting everyone to our main meeting area and providing donuts to celebrate their accomplishments. We call it “Donuts in the Dojo,” and our CISO is there to congratulate everyone on their newfound security expertise.

This is not only a win for the security team, it’s also a win for the employees. They can more confidently navigate the world of security professionals and better understand our customers. All of this means it’s a huge win for Code42.


Tips From the Trenches: Using Slack to Enhance Security

Slack, the popular collaboration tool, got more than its share of media attention last month. All this Slack buzz gives us an opportunity to share how we use Slack here at Code42. We’ve thoroughly vetted Slack, and rather than banning it as a security risk, we actually use the tool to enhance our security capabilities.

Why Code42 uses Slack

So, what about those security concerns? Any tool that facilitates the sharing of information brings some risk of user abuse or error, such as oversharing or mis-sharing. That’s true for Slack, just as it’s true for Google Docs, Dropbox — and even, yes, Microsoft Teams. Just like our approach to data loss protection, our internal security strategy takes an honest look at risk mitigation, focusing on the biggest risks without unnecessarily impeding productivity, collaboration and innovation. Like all our third-party vendors, we hold Slack to our rigorous vendor security standard, which includes an annual vendor security risk reassessment. Moreover, we’ve put security controls in place that balance the need to mitigate the inherent risks of information sharing with the productivity and innovation value of the tool itself.

How we use Slack

At Code42, nearly every employee uses Slack every day for real-time direct messaging, increasing productivity and helping us deliver on one of our core company values: Get it Done, Do it Right. The Code42 security team, in particular, leverages Slack in unique and powerful ways. Here are a couple of ways we have integrated Slack functionality to improve our internal security program:

  1. Security alert notifications: Slack’s Incoming WebHooks allow you to connect applications and services to your Enterprise Slack. We use this capability to implement security notifications tied to activities in our security applications, which are then posted in a corresponding Slack channel. This provides our security analysts and partners across the business with real-time alerts right in the application where they are already communicating and collaborating throughout the day, helping them take appropriate and timely action.

    For instance, we have created private channels to alert on critical events within different environments, such as alerts from Capital One’s Cloud Custodian. The alerts are based on policy violations that we define in YAML policy files. Cloud Custodian then alerts our team — and takes action when needed. For example, if Cloud Custodian sees an S3 bucket configured as public, it will make it private by changing permissions in the access control lists (ACLs) and bucket policies — and then notify our teams of the change via Slack as depicted below.



    Screenshot of Slack’s Incoming WebHooks tool:


  2. Security news and updates: Our security team also created a public channel (open to everyone at Code42) as a collaborative workspace for all users. The public channel enables staff to crowdsource and share security knowledge, and to have discussions around the latest security news. Anyone can post security articles, whitepapers, podcasts, blogs or news — highlighting interesting ideas — and weighing in on each other’s responses. This channel acts as a security news feed, delivering just-in-time security-related information to employees to keep them aware of the latest security threats and trends. Code42 employees also often post what they are seeing in their own news feeds as they become more security savvy.
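The alert pipeline described in item 1 can be sketched in a few lines of Python. This is a minimal illustration of posting a security finding to a Slack Incoming WebHook, not our production integration; the webhook URL, channel and payload fields are placeholders.

```python
import json
import urllib.request

# Placeholder URL -- a real one is generated by Slack's Incoming WebHooks app.
WEBHOOK_URL = "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"

def build_alert(resource: str, finding: str, action_taken: str) -> dict:
    """Format a security finding as a Slack webhook payload."""
    return {
        "text": (
            ":rotating_light: *Security alert*\n"
            f"Resource: `{resource}`\n"
            f"Finding: {finding}\n"
            f"Remediation: {action_taken}"
        )
    }

def post_to_slack(payload: dict, url: str = WEBHOOK_URL) -> int:
    """POST the payload to the Incoming WebHook; returns the HTTP status code."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example finding, mirroring the public-S3-bucket scenario above.
alert = build_alert(
    resource="s3://example-bucket",
    finding="Bucket ACL allows public read access",
    action_taken="ACL reset to private; bucket policy updated",
)
# post_to_slack(alert)  # uncomment with a real webhook URL
```

Because the message lands in the same channel the analysts already watch, remediation details travel with the alert rather than sitting in a separate console.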

Walking the Talk

At Code42, we talk a lot about the fundamental paradox of enterprise information security: Information-sharing is both the key to success — and the biggest risk — in organizations. The smart approach focuses on controlling the risk, so you can unlock that value. We’ve vetted Slack and put security controls in place, so we can leverage its capabilities to fuel collaboration, enhance productivity and improve our internal security capabilities. Slack integrates with our security tools for real-time alerting and allows us to quickly disseminate security knowledge throughout the organization. Our internal use of Slack demonstrates how we walk the talk in our own approach to information security.

The Five Big Themes I’ll Be Looking for Next Week at Black Hat

If there was one annual event that encapsulates cybersecurity, it’s Black Hat. The conference has run every year in Las Vegas since 1997, right about the time enterprise data security started maturing into widespread practice. For more than 20 years, thousands have gathered to learn during the Black Hat training sessions and to see cutting-edge research on display at the Black Hat Briefings. Over the years, the crowds have grown, and so has the importance of data security.

Every year at Black Hat, I try to keep an eye out for different trends. These are themes that I believe will be important and drive a lot of the conversation at the conference, not to mention the months that follow. Here’s what I’m looking at this year:

“ What piques my interest about insider threat isn’t just the number of attacks perpetrated by insiders; it’s about how damaging insiders can be to an organization. After all, insiders know where the data is and what data is valuable. ”

The insider threat

There have been several recent news stories that highlight insider threat, and it’s no fluke that they dominate the news cycle. Insider threats are up 50 percent in the past four years alone. Recently, we learned about the McAfee employees who quit and were sued for allegedly taking intellectual property to a competitor. Then there was the SunPower exec who emailed himself highly sensitive trade secrets. And the Desjardins employee who accessed the data of nearly three million bank customers. Earlier this year, the Verizon Insider Threat Report found that 20 percent of cybersecurity incidents originated from trusted insiders and often went unnoticed for weeks, months and even years.

What piques my interest about insider threat isn’t just the number of attacks perpetrated by insiders; it’s about how damaging insiders can be to an organization. After all, insiders know where the data is and what data is valuable. I’ll be looking for lots of conversations in this area, and new insights into ways to better detect and respond to insider threats before IP is gone and the damage is done.

The increased importance of DevSecOps

The popularity of DevOps keeps growing. According to Allied Market Research, the global market for DevOps tools was nearly $3 billion in 2016 and is expected to reach over $9 billion by 2023 — growing at a healthy 19% annual clip. Yet, enterprises have a challenge when it comes to incorporating security into the DevOps application development and management processes. That’s what DevSecOps is all about. I think we’re going to hear some great advice and ways to maximize the incorporation of strong security practices into DevOps.

Insight into the emerging threat landscape

We always look toward finding a fresh perspective on the threat landscape at Black Hat. The conference presenters are always examining new attack methods in detail. This year will be no different, and I’m expecting to see interesting approaches to attacks via social media and insider threat exploits.

Latest trends in Zero Trust security

Zero Trust has moved from buzzword to reality, but we’re just beginning to see organizations move beyond superficial Zero Trust implementations. Zero Trust is a concept of security centered on the belief that companies shouldn’t trust anyone or anything inside or outside their perimeters, and instead must verify and monitor anything and everything trying to access company data. I expect the conversations around it to become more meaningful and results-based. This will continue to be an interesting and compelling topic in the months following Black Hat.

A deep look inside a few interesting security vulnerabilities

At Black Hat, if you don’t make it to a few sessions where presenters dive deep into a security flaw or exploit, you’re really missing out. These sessions are eye-opening, heart-stopping and mind-jarring. They show the ways in which people make new inroads into devices, hack into large enterprises, and leverage vulnerable software to do it silently.

I’m also going to keep a lookout for new buzzwords and emerging attack trends. For instance, we already see the rapid rise of deepfake videos. And let’s face it, these videos are getting incredibly good, thanks to sophisticated algorithms that achieve unprecedented realism. Soon, we’ll have trouble trusting our own eyes and ears to discern what is real. This will be fun to watch take shape this year.

Finally, we all know that the IT industry is increasingly turning to artificial intelligence (AI) and machine learning to help secure our increasingly complex environments. But when it comes to new security technologies, it’s a bit of a double-edged sword. What can be used for our defense can also be used to attack us. AI is no different, and in the near future, we’re going to see AI used more commonly to attack enterprises. AI-based attacks are on their way. You can count on it.


Mitigating Departing Employee Data Loss Threats


The first thing most IT security pros think when they read, “DLP is a program or a process — not a product,” is, “A program sounds a lot more complicated and expensive than a product.” But that doesn’t have to be the case. In my last blog, I outlined 10 key steps to building a simplified insider threat program that’s based around three key workflows: departing employees, organizational change and high-risk employees. We believe these three scenarios account for 80% of insider threat incidents.

Today, we’re diving into the first workflow: departing employees.

“ Most organizations don’t have a specific and consistent workflow to account for the unique data exposure risks surrounding a departing employee. ”

It’s a big problem, and it’s only getting bigger

Even the very best places to work are feeling the pain of this growing challenge. People are changing jobs more frequently than ever, a trend that started shortly after the recession and has continued accelerating: Employee “quits” (voluntary departures) have risen every year since 2010, according to the U.S. Bureau of Labor Statistics. A recent survey suggests more than half of U.S. workers will look for a new job in 2019 — and half of those new-job-seekers haven’t even been at their current gig for a full year. One big reason: employees increasingly don’t have the same feelings of loyalty toward their employers — in fact, they fully expect to switch jobs frequently in order to stay fresh and grow. With the job market remaining strong (especially for in-demand knowledge workers), their confidence in finding a new job is as high as ever.

And when they leave, they’re taking valuable and/or sensitive data with them. The Code42 2018 Data Exposure Report showed that roughly half of employees admit to taking IP with them when they leave. Even more concerning: the higher you go in the company, the more likely data is walking out the door, with over 70% of execs admitting to taking IP from one employer to the next.

It’s not black and white

The risk posed by departing employees tends to be viewed in absolute terms. Most organizations assume that 99.9% of employees would NEVER take anything or do anything risky. “They’re good people; they know better,” is something we hear all too often. On the flip side, most assume that any employee that does take data is doing so maliciously. The reality is that there’s a tremendous gray area. Most people aren’t outright stealing. They’re doing things like:

  • Pulling together their best work to help them land a new job
  • Taking the work they’re most proud of with them
  • Taking things like templates to use in their new gig
  • Taking “their” client info
  • Deleting files to “help” clean up their devices for the next user
  • Even just sharing work with colleagues, or pulling important working files onto a thumb drive for a current colleague to ensure the project keeps moving forward after they leave

Most have good (if self-centered) intentions. But they’re still taking actions that put the company at risk.

Offboarding is just as important as onboarding

While most organizations dedicate significant time and resources to their employee onboarding program, offboarding gets far less attention. In fact, most organizations don’t have a specific and consistent workflow to account for the unique data exposure risks surrounding a departing employee, much less one that involves the security team.

Building a departing employee workflow

With employee departures accelerating across the workforce, you need a dedicated program to account for these risks. So, what should that program look like? Here are a handful of best practices that simplify the task:

  • Have a corporate policy. You may think your idea of data theft is universal. It’s not. Every organization needs an explicit, written policy around employee data exfiltration: what they can and can’t take; where they can and can’t move data; and how they should go about getting permission to take files or data upon their departure.
  • Publicize the policy. Bad habits are hard to break. Make data protection best practices part of employee onboarding. But also make sure data exfiltration review is part of the offboarding process. A simple reminder can go a long way toward preventing well-intentioned employees from doing something they shouldn’t.
  • Create a departing employee trigger — and execute the workflow every time. Most organizations have a new employee trigger, owned by HR, that automatically sets in motion an onboarding process that includes everything from training to IT and security teams giving the new employee the access privileges they’ll need. HR should also have a departing employee trigger that automatically sets in motion an offboarding process that includes a security analysis of the employee’s data activity to account for potential risks. Just like onboarding, this departing employee workflow should be followed for every departing employee — not just those you consider high-risk. 
  • Go back in time. A common mistake is to think employees start taking data after they give notice or right before they leave. Moreover, most employee monitoring tools only start monitoring an employee once notice is given. The reality is that the risky activity most often occurs much, much earlier — as they’re looking for a new job; after they’ve accepted another position, but before they’ve given notice; etc. To account for this reality, best practice is to analyze departing employee activity going back months from the day they give notice.
  • Build a “red flag” list with LOB. By focusing on just departing employees, you’ve already dramatically narrowed the scope of the security analysis from the traditional, “classify ALL your data” approach of legacy DLP. But you can home in further by engaging LOB leaders to build a specific list of your organization’s most valuable files and file types: source code for tech companies, CAD drawings at an engineering firm, Salesforce files and customer lists, spreadsheets with financial info, codenames for R&D projects, etc. Make sure your monitoring tools allow you to search and filter activity by file type, file name, etc., so you can quickly look for these red-flag activities.
  • Search for common signs of suspicious activity. In addition to looking at specific file categories, your monitoring tools should also allow you to easily see when file activity deviates from normal patterns (e.g., a spike), to search specifically for after-hours or weekend activity (when suspicious activity often occurs), and to uncover suspicious file mismatches (e.g., a customer list file renamed “photo of my daughter” whose MIME type doesn’t match its extension).
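The file-mismatch check in the last bullet can be sketched simply: compare the MIME type implied by the extension with the type implied by the file’s leading bytes. This is an illustrative sketch, not a description of any particular product; the signature table and file names are placeholders, and a real tool would use a full signature database.

```python
import mimetypes
import os
from typing import Optional

# A few well-known magic numbers; real tools carry far larger signature sets.
MAGIC_SIGNATURES = {
    b"%PDF": "application/pdf",
    b"\x89PNG": "image/png",
    b"PK\x03\x04": "application/zip",  # also the container for .docx/.xlsx
}

def sniff_content_type(path: str) -> Optional[str]:
    """Guess the real content type from the file's leading bytes."""
    with open(path, "rb") as f:
        head = f.read(8)
    for magic, mime in MAGIC_SIGNATURES.items():
        if head.startswith(magic):
            return mime
    return None

def extension_mismatch(path: str) -> bool:
    """True when the extension's MIME type disagrees with the sniffed content."""
    claimed, _ = mimetypes.guess_type(path)
    actual = sniff_content_type(path)
    return claimed is not None and actual is not None and claimed != actual

# Example: a spreadsheet (zip container) renamed to look like a photo.
with open("photo_of_my_daughter.jpg", "wb") as f:
    f.write(b"PK\x03\x04" + b"\x00" * 60)

flagged = extension_mismatch("photo_of_my_daughter.jpg")  # -> True
os.remove("photo_of_my_daughter.jpg")
```

A renamed customer list flags immediately because the `.jpg` extension claims `image/jpeg` while the bytes say `application/zip`.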

“ To get to the bottom of suspicious activity and act with confidence, you need the ability to restore and review any version of any file — so you can see if it’s really a problem. ”

A departing employee workflow example

Here’s a rough look at how a departing employee workflow…works:

1) TRIGGER
Employee gives notice, triggering activity review by IT security.

2) ANALYSIS
Security looks back at the past 90 days of employee data activity, searching for suspicious or risky actions.

3) ACTIVITY FLAGGED
Security flags suspicious activity: a product pricing spreadsheet that was emailed to an external address.

4) HR/LOB REVIEW
Security restores the spreadsheet and brings it to HR. HR brings it to the LOB manager. The LOB manager confirms that emailing the pricing document was not authorized.

5) ESCALATION TO LEGAL
Depending on the activity and severity of the risk, the issue may be escalated to legal.

It all depends on visibility

The departing employee workflow — like your entire insider threat program — depends on visibility. To be able to look back at the last 90 days of a departing employee’s activity, you can’t be working with a DLP or monitoring solution that only kicks on after the employee gives notice. You need to be continuously monitoring all data activity, so you’re instantly ready to execute a 90-day security analysis of any employee, as soon as they give notice. This visibility can’t be limited to file names. To get to the bottom of suspicious activity and act with confidence, you need the ability to restore and review any version of any file — so you can see if it’s really a problem. With this kind of always-on monitoring, you can enable the kinds of targeted triggers that focus your attention where it matters most — and act quickly to mitigate risk and potential damage from the many things departing employees take with them when they leave.


Happy Anniversary! GDPR One Year Later

It’s been a year since we — and many of you — went live with enhancements to our privacy and security programs tied to GDPR, and two years since we started the GDPR journey. That’s why it’s a great time to look back at the impact GDPR has had on the way we do business.

This post is purely for general information purposes and is not intended as legal advice. It offers a glimpse into Code42’s early GDPR implementation; our program, like GDPR itself and other national and international privacy rules, will continue to evolve and mature.

“ The GDPR journey shouldn’t be a one-department initiative or the sole responsibility of Legal or Security. It must be a business-driven initiative with Legal and Security providing recommendations and guidance. ”

What we did to get ready for May 2018

We started preparing for GDPR around May 2017. The GDPR journey shouldn’t be a one-department initiative or the sole responsibility of Legal or Security. It must be a business-driven initiative with Legal and Security providing recommendations and guidance. At Code42, we established a cross-functional group comprised of Legal, Security, IT and system subject matter experts. The key activities of this group were to:

  1. Create an inventory of applications in scope for GDPR. We have European employees and customers so we had to look at applications that were both internal and customer-impacting. When outlining in-scope applications for GDPR, we kept in mind that more restrictive data privacy laws seem imminent in the U.S. We also conducted a cost-benefit analysis to determine whether we should keep non-EU PI in scope now or revisit it at a later date.  
  2. Define retention periods for all of the applications in scope. Prior to our GDPR journey, we had a retention program in place, but it was largely focused on data we knew we had legal, regulatory or other compliance obligations around, including financial records, personnel files, customer archives and security logs. GDPR just gave us the nudge we needed to mature what we were already committed to and have better conversations around what other data we were storing and why.
  3. Figure out how to purge personal data from applications. This can be challenging for SaaS organizations. When applications are managed on premises, it’s much easier to delete data you no longer need, but translating that to all your SaaS applications is another story. There are a few areas where SaaS applications are still maturing compared to their on-prem counterparts, and data deletion appears to be one of them. Delete (or anonymize) data where you can. Otherwise, either add the application to a risk register, have the application owner formally accept the risk, and submit a feature request to the vendor, or look for a new vendor who can meet your retention requirements.
  4. Create an audit program to validate compliance with our security program. We are fortunate to have an awesome internal audit program that monitors effectiveness of our security program, among other IT and technology-related audit tasks. So it was logical to test our in-scope applications against our newly defined retention requirements. We review applications periodically.
  5. And lastly, but just as important, define a process for data subjects to request that their information be deleted outside of a standard retention schedule (aka “right to be forgotten”). It is important to remember that this is not an absolute. While we want to honor a data subject’s request as much as possible, there may be legitimate business cases where you may need to maintain some data. The key for us was defining what those legitimate business cases were so we could be as transparent as possible if and when we received a request.
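On the “delete (or anonymize)” point in step 3, one common pattern when true deletion isn’t available is pseudonymization: replacing a personal identifier with a keyed, irreversible token. The sketch below is a generic illustration, not Code42’s actual process; the pepper value and record fields are placeholders.

```python
import hashlib
import hmac

# Secret "pepper" kept outside the dataset; a placeholder value here.
PEPPER = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a personal identifier with a keyed token.

    The same input always maps to the same token, so joins and
    referential integrity across records survive while the
    identity itself does not.
    """
    digest = hmac.new(PEPPER, value.lower().encode("utf-8"), hashlib.sha256)
    return "anon-" + digest.hexdigest()[:16]

# Example: scrub the identifier but keep the analytics-relevant fields.
record = {"email": "jane.doe@example.com", "last_login": "2019-04-30"}
record["email"] = pseudonymize(record["email"])
```

Note that under GDPR, pseudonymized data is still personal data unless the key is destroyed, so this technique reduces risk rather than removing data from scope.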

What we’ve learned in the last year

So what have we learned about GDPR one year and two internal audits later? A lot. 

What’s going well

1. A vendor playing nice

We had a really great success story early on with one vendor. When we dug into it, we found that our users were previously set up with the ability to use any email address (not just a Code42 email). We also learned our instance was configured to save PII that wasn’t a necessary business record. Based on that conversation, we were able to make a few configuration changes and actually take that application out of scope for GDPR! 

2. A more robust application lifecycle program and greater insight into the actual cost of a tool

As a technology company that is continually innovating, we want to empower our users to use tools and technologies that excite them and increase productivity. At the same time, we want to ensure we are addressing security, privacy and general business requirements. Users often find tools that are “so cheap” in terms of the cost of user licenses. Our new Application Lifecycle Management (ALM) process, however, gives us a better sense of the actual cost of a new tool when we factor in:

  • Onboarding requirements: Think Legal, Security, IT, Finance. Are there compliance requirements? Do we already have similar tools in place?
  • Audit requirements: Will this be part of the GDPR data retention audit, user access audit or other application audit?
  • Stand-up/stand-down requirements: Will it integrate with single sign-on solution? How does it integrate with other tools? How is data returned or destroyed?
  • Support requirements: Who are users going to contact when they inevitably need help using the tool?

When the person making the request can see all of the added costs going into this “inexpensive” tool, it makes for easier discussions. Sometimes we’ve moved forward with new tooling. Other times we’ve gone back to existing tools to see if there are features we can take advantage of because the true “cost” of a new solution isn’t worth it.

3. A great start toward the next evolution of privacy laws

On the heels of GDPR, there has been a lot of chatter about the introduction of more robust state privacy laws and potentially a federal privacy law. While future regulations will certainly have their own nuances, our GDPR work positions us to comply with them through small tweaks rather than another major lift.

What’s not working

1. What exactly IS personal data?

We have had a lot of conversations about what data was in scope… and I mean A LOT. According to the GDPR, personal data is defined as any information related to an identified or identifiable natural person. That puts just about every piece of data in scope. And while it may seem like an all-or-nothing approach may be easier, consider risks that could affect things like availability, productivity, retention, etc. when implementing controls, then scope programs appropriately to address those risks in a meaningful way. 

2. “Yes, we are GDPR compliant!”

One thing we realized very quickly was that it wasn’t enough to simply ask our vendors if they were “GDPR compliant.” We ended up with a lot of “Yes!” answers that upon further investigation were definite “No’s.” Some lessons learned: 

  • Understand the specific requirements you have for vendors: Can they delete or anonymize data? Can they delete users? 
  • Whenever possible, schedule a call with your vendors to talk through your needs instead of filing tickets or emailing. We found it was much easier to get answers to our questions when we could talk with a technical representative.
  • Ask for a demo so they can show you how they’ll delete or anonymize data and/or users. 
  • Don’t rely on a contractual statement that data will be deleted at the end of a contract term. Many tools still aren’t able to actually do this. It’s important that you know what risks you are carrying with each vendor.
  • Audit your vendors to ensure they are doing what they said they would. 

Would we do it all over again?

Actually, yes. While our GDPR project caused some grumbling and frustration at the beginning, it has now become an integrated part of how we operate. There is no panic and no annoyance. Instead, there are lots of great proactive conversations about data. At the end of the day, we have matured our tool management, and our privacy and security; and our data owners feel a stronger sense of data ownership.

Wanna see a sample of our Application Lifecycle Management (ALM) vetting checklist?