
Tips From the Trenches: Providing Value Through Business Continuity

No matter what we do in our jobs, we all want to provide value back to the organizations where we work. In some jobs, tangible evidence of value is very apparent, such as hitting your sales quota or shipping code for a feature in your software. In business continuity, that can be a bit of a challenge. To start, most people don’t understand what it is or what responsibilities are tied to it. If someone asks me what I do and my response is “business continuity,” the conversation usually goes a different direction shortly thereafter. That makes demonstrating value to your company difficult from the get-go.

“ If ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful business continuity program. ”

Here are a few key principles I have learned in my business continuity journey that have helped me show value within my organization:

Leadership buy-in

Real simple: your business continuity program has to have this in order to succeed. If you think you’re fully prepared to respond to and recover from a disaster without buy-in from leadership, you’re kidding yourself. Leadership needs to understand what you’re doing, why you’re doing it and how it will benefit their department and the company as a whole. This will give you top-level support and make your job easier. Having guidance from above will ensure that your requests for resources, whether for a business impact analysis or for recovery testing, are granted.

No doubt getting leadership’s attention can be a challenge, but it has to happen. I have been a part of organizations that didn’t have it, and the result was a program that could never meet its full potential because our requests for time and effort from other departments were never a priority.

At Code42, we worked with each member of our executive leadership team to outline what we were doing, why we were doing it and what assistance we would need from their department. Department leaders were then able to give direction on who they wanted us to work with and set the whole program in motion.

Narrow the scope of your program

On the surface this seems counterintuitive. Why not cover every function and supporting activity? The reasoning is that most companies don’t have a dedicated team of employees focused on business continuity. For some, business continuity is simply one of many responsibilities they hold. Beyond the manpower problem, the further you head into supporting functions and away from what’s really critical, the lower the rate of return for the company. The key is to focus on what’s critical. I have experienced this firsthand: my drive to make sure all business functions were documented and prepared for had me spending countless hours covering the full spectrum of the business. By the time I was finished, the data was already out of date, which amounted to a poor use of resources with little to no value for the company.

When we worked with each member of the executive leadership team at Code42, we kept our scope to the top two critical functions that each department performs. This helped our program avoid the minutiae and focus squarely on what’s critical for supporting our product, our customers and our employees.

Make the information accessible

The information for your business continuity program should not be sequestered away from your employees; it should be easy to view and update. This is a rather obvious statement, but one that I have seen many companies struggle with. Here at Code42, we made a misstep by thinking the solution to our business continuity challenges lay with a continuity software provider. The intent was for it to help us manage all of our data, produce plans and be a one-stop shop for all things business continuity. Not long after onboarding, challenges started to emerge. The biggest challenge was that the information was not accessible to the workforce. The other was that it didn’t tie in to any software already in use at Code42. It was on an island, and of little to no value to the business. A pivot was needed, and thankfully we didn’t have to go far for an answer.

The answer came from taking a step back and determining what tools employees use across the company on a day-to-day basis. For us, the answer lay within Confluence, which serves as our internal wiki. This is where we build out department-focused pages covering their respective critical functions and dependencies. Connecting to Confluence allowed us to tie in another company-wide application, JIRA, for tickets related to vendor assessments, risks and incidents. Our focus throughout the process was to ensure value was being passed on to Code42 and its employees, and the key piece of that was having information easily accessible.

Business continuity has a number of inherent challenges, but if ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful program. I hope these principles help you provide better value to your own company.

Connect with Loren Sadlack on LinkedIn.

Code42 Talks DLP with Dark Reading

After unveiling our Next-Gen Data Loss Protection solution at the RSA Conference 2019 in San Francisco, just about every visitor to the Code42 booth asked: How is data loss protection different than data loss prevention?

To answer this question, I sat down with Dark Reading’s Terry Sweeney for a video interview. You’ll find the highlights of our conversation in a short video below — and you can watch the full interview at Dark Reading.

The home security analogy

I like to start with a simple analogy everyone can identify with: Let’s say a would-be burglar comes to your door while you’re at work. In theory, you can rest assured that the person will not break into your house — because you have locks on your doors, right? But we all know locks aren’t failsafe, so what if this individual does find a way in? You won’t know about any of this until you get home — hours later — or until you realize something is missing, perhaps days later. By then, it’s much harder to figure out what all was taken, who took it and when it was taken. That’s the problem with the traditional data loss prevention model: it’s focused on prevention — but if that fails, you’re not left with much.

Now, imagine you have Nest cams inside and outside your house. Your front-door Nest cam notifies you immediately, via smartphone, to activity at your front door. With real-time visibility, if you don’t recognize the face of the visitor and/or are concerned with the actions he takes next (e.g., picking the lock, breaking a window, etc.), you can take action right now. Even if you discover something missing later in the day, you have video logs that will help you figure out when that article was taken and how. Just like the Nest cams, Code42 Next-Gen Data Loss Protection shows you exactly what’s happening, when it’s happening — so you can decide if it’s important and take action now.

Paradigm shift: all data matters

Another major difference in approach between legacy data loss prevention and Code42 Next-Gen Data Loss Protection: how the tools define the value of data. Traditional DLP tools require an organization to decide which data and files are valuable or sensitive — and then figure out how to encode that judgment in rules and policies. But today’s knowledge workers are constantly creating data — and it all matters. From developing new software, to innovating manufacturing processes, to providing consulting services, more and more businesses across every sector are ultimately in the business of making new ideas. For these “progressive makers,” as we call them at Code42, every file and every piece of data holds value in the chain of idea creation. And the value of any given piece of data can skyrocket in an instant — when a project turns from theoretical tinkering into tangible innovation.

Finally, while traditional forms of protected data like PCI, PII and HIPAA-regulated records tend to follow predictable formats and patterns that can be recognized through rules, all of this “idea data” lives largely in unstructured formats. The data relating to a software product launch, for example, might span from source code files, to Word documents containing marketing plans, to Excel spreadsheets with revenue forecasts and production budgets, to CRM data on target prospects. There’s no way to create a blanket “rule” for defining the structure or pattern of data relating to a valuable product launch.

“ In this new reality of endpoints and cloud where all data matters, Code42 offers an unmatched core capability: We’ve gotten really good at collecting and saving every file, from every user, on every device. ”

In this new reality of endpoints and cloud where all data matters, Code42 offers an unmatched core capability: We’ve gotten really good at collecting and saving every file, from every user, on every device. More importantly, we’ve gotten really good at doing it in near-real time, doing it cost-effectively and doing it without inhibiting users as they’re working. This means organizations no longer have to define, at the outset, what data matters. And this complete data collection unlocks the kind of immediate, comprehensive visibility that creates the foundation of data loss protection — and sets it apart from data loss prevention.

Two critical questions DLP buyers need to ask

One of my favorite questions from Terry Sweeney was, “What should a DLP buyer look for as they’re evaluating a solution?” My answer is simple:

  1. How soon does the tool show you that something is going wrong?
  2. How soon does the tool let you take action?

The most consistent and concerning finding from annual infosecurity reports like Verizon’s Data Breach Investigations Report and the Ponemon Institute’s Cost of a Data Breach Study is that most organizations aren’t discovering incidents for weeks — or months. In fact, the Ponemon Institute’s 2018 research showed the average breach took 197 days for an organization to discover. That’s six months before the investigation even begins — and even longer until the organization can attempt to take some remedial action. That’s a lot of time for data to be lost, tracks to get covered and stolen IP to do damage to a business.

Code42 Next-Gen Data Loss Protection cuts that time-to-awareness from months to minutes. Take the common example of a departing employee: You’ll know if they’ve taken data before they even leave the building — not months later when a rival launches a competing product. Moreover, you’re getting immediate and full visibility around the context of the departing employee’s data removal — you can look at the exact file(s) and see if it’s valuable and/or sensitive — so you can make decisions and take action quickly and confidently.

Enabling infosec automation

My discussion with Terry ended with a look at perhaps the most important factor driving infosecurity forward: the expanding role of automation in helping organizations manage and protect ever-increasing volumes of data. Many organizations fight expanding data security threats with a small handful of infosecurity staff — half of whom are “on loan” from IT. Automation and orchestration platforms pull together and make sense of all the alerts, reports and other data from various infosecurity tools — fighting false positives and alert fatigue, and allowing teams to see more and do more with fewer human eyes.

But these platforms are only as good as the inputs they’re fed. They rely on comprehensive data feeds to ensure you can create the customized reports and alerts you need to reliably bolster your security automation. The complete security insights gathered by Code42 Next-Gen Data Loss Protection ensure there are no blind spots in that strategy. That’s why we’re focused on making sure all our tools plug into automation and orchestration platforms, and support the workflow automation capabilities you already have in place. All Code42 tools are available through APIs. If you want us to integrate data and alerts to be automatically provisioned in your SIEM or orchestration tool, we can do that. If you want us to automatically raise an email alert to your ticketing system, we can do that, too. Furthermore, Code42’s Next-Gen DLP allows you to take a more proactive “data-hunting” approach to data security, much like you would with threat hunting to deal with external malware and attacks.

This is where the value of Code42 Next-Gen Data Loss Protection gets really exciting. Our tool gives you incredible off-the-shelf value; it does things no other tool can. We’re seeing organizations integrating our tool with advanced automation and orchestration platforms — using our tool in ways we hadn’t even considered — and really amplifying the value and driving up their return on investment.

Watch the video highlights of the Dark Reading interview here, or watch the full interview at Dark Reading.


Successful Software Security Training Lessons Learned

How enterprises build, deploy and manage software applications has been turned upside down in recent years. So too have long-held notions of who is a developer. Today, virtually anyone in an organization can become a developer—from traditional developers to quality assurance personnel to site reliability engineers.

Moreover, this trend includes an increasing number of traditional business workers, thanks to new low-code and so-called no-code development platforms. These platforms are making it possible for these non-traditional developers, sometimes called citizen developers, to build more of the apps the enterprise needs. Of course, whenever an organization has someone new developing code, it potentially introduces new security, privacy and regulatory compliance risks.

“ Recently, at Code42, we trained our entire team, including anyone who works with customer data, to ensure everyone was using best practices to secure our production code and environments. ”

For most organizations, this means they must reconsider how they conduct security awareness and application security testing. Recently, at Code42, we trained our entire team, including anyone who works with customer data. This comprised the research and development team, quality assurance, cloud operations, site reliability engineers, product developers and others, all to ensure everyone was using best practices to secure our production code and environments.

We knew we needed to be innovative with this training. We couldn’t take everyone and put them in a formal classroom environment for 40 hours. That isn’t the best format for many technologists to learn in.

Instead, we selected a capture the flag (CTF) event. We organized participants into teams that were presented with a number of puzzles designed to demonstrate common vulnerability mistakes, such as those in the OWASP Top 10. We wanted to create an engaging, hands-on event where everyone could learn new concepts around authentication, encryption management and other practices.

We had to create content that would challenge and yet be appropriate and interesting for everyone, including the engineers. It wasn’t easy considering each team uses different tools and languages and has skill sets that vary widely. Watching the teams work through the CTF was fascinating because you could see their decision-making processes when it came to remediating the issues presented. For problems where a team wasn’t sure of the solution, we provided supporting training materials, including videos.

“ We had to create content that would challenge and yet be appropriate and interesting for everyone, including the engineers. It wasn’t easy considering each team uses different tools and languages and has skill sets that vary widely. ”

While the event was a success overall, we certainly learned quite a bit that will create a better experience for everyone in our next training. 

Let me say, the CTF style was exceptional. The approach enabled individuals to choose areas they needed to learn, and the instructional videos were well received by those who used them. But I’ll tell you, not everyone was happy. About three-quarters of my team loved it, and then the other quarter wanted to grab pitchforks and spears and chase me down.

First, throughout the contest, the lack of a common development language proved to be a challenge. Within most of the teams, the engineers chose the problems that were in a language with which they were familiar. That often cut the quality assurance or site reliability engineers out of helping on that problem. No good.

Gamification, while well intended, caused problems. As I mentioned, we had instructional videos. That way, if a team didn’t know the answer to a problem, they could watch the videos, get guidance and learn in the process. But watching the videos added a time element to the contest, which actually caused individuals to skip them to avoid falling behind.

How we implemented the leaderboards proved counterproductive. Remember how we all (well, many of us) feared being the last person picked in gym class growing up? Well, leaderboards shouldn’t be present until the game ends, and even then they should summarize only the top finishers. No one likes to know they were in the bottom 10 percent, and it doesn’t help the learning process.

Dispel the fear. These are training and awareness classes. While official, accredited security training often has a pass/fail outcome, this awareness training is for education. However, our employees feared their performance would somehow be viewed as bad and could affect their performance reviews—or employment. Face these rumors up front and make it clear the CTF results aren’t tied to work performance.

Overall, our team did learn valuable lessons using our CTF format — the innovative approach we took to educate them was successful in that way. But next time I hold a contest, we will definitely incorporate changes based on the lessons above. And I’ll work harder to strike the right balance between a formal lecture-and-classroom setting and a competitive event when participants bring varying experience and skill sets.

  


Security Pitfalls of Shared Public Links

Imagine terabytes of corporate data exposed in the wild by employees sharing publicly available links on the cloud. Sound far-fetched? It’s not. According to a recent article from SiliconANGLE, that’s exactly what happened when security researchers uncovered terabytes of data from over 90 companies exposed by employees sharing publicly available links to Box Inc.’s cloud storage platform. And while it’s easy to think that this problem is restricted to Box, it is in fact a problem most cloud services, like Dropbox or OneDrive for Business, need to address.

“ Cloud security is failing every day due to public file share links – content that users deliberately or accidentally expose to outsiders or to unapproved users within the company. ”

Cloud security is failing every day due to public file share links – content that users deliberately or accidentally expose to outsiders or to unapproved users within the company. This presents significant gaps in cloud security and compliance strategies and raises important questions such as:

  • What data is going to an employee’s personal cloud?
  • Who’s making a link public instead of sharing it with specific people?
  • Are departments or teams using other/non-sanctioned clouds to get their work done?
  • Are contractors getting more visibility than they should in these clouds?

Compounding the problem, the remedy most cloud services offer administrators is to “configure shared link default access” for users. Administrators can configure shared link access so that accidental or malicious links can’t be created in the first place; however, there is a clear loss of productivity when users who need to keep collaborating and sharing are mistakenly denied. This is where IT/security teams need to strike the fine balance between protecting corporate IP and enabling user productivity.

Code42’s approach to DLP doesn’t block users or shut down sharing, giving organizations visibility while information flows freely between partners, customers and users in general. Beyond understanding that a link has gone public in the first place, security protocols should further include:

  • Identifying files that are going to personal clouds
  • Understanding who’s sharing links publicly and why
  • Mitigating instances of non-sanctioned clouds
  • Gaining visibility into cloud privileges extended to contractors or other third parties

Tips From the Trenches: Cloud Custodian–Automating AWS Security, Cost and Compliance

“We’re moving to the cloud.” If you haven’t heard this already, it’s likely you will soon. Moving to the public cloud poses many challenges upfront for businesses today. Primary problems that come to the forefront are security, cost and compliance. Where do businesses even start? How many tools do they need to purchase to fulfill these needs?

After deciding to jump-start our own cloud journey, we spun up our first account in AWS, and it was immediately apparent that traditional security controls weren’t necessarily going to adapt. Trying to lift and shift firewalls, threat and vulnerability management solutions and the like ran into a multitude of issues, including but not limited to networking, AWS IAM roles and permissions, and tool integrations. It was clear that tools built for on-premises deployments were no longer cost or technologically effective in AWS and a new solution was needed.

“ It was clear that tools built for on-premises deployments were no longer cost or technologically effective in AWS and a new solution was needed. ”

To remedy these discoveries, we decided to move to a multi-account strategy and automate our resource controls to support increasing consumption and account growth. Our answer to this was Capital One’s Cloud Custodian open source tool because it helps us manage our AWS environments by ensuring the following business needs are met:

  • Compliance with security policies
  • Enforcement of AWS tagging requirements
  • Identification of unused resources for removal or review
  • Enforcement of off-hours to maximize cost reduction
  • Enforcement of encryption requirements
  • Detection of overly permissive AWS Security Groups
  • And many more…

After identifying a tool that could automate our required controls in multiple accounts, it was time to implement it. The rest of this blog will focus on how Cloud Custodian works, how Code42 uses the tool, what kinds of policies (with examples) Code42 implemented, and resources to help you get started implementing Cloud Custodian in your own environment.

How Code42 uses Cloud Custodian

Cloud Custodian is an open source tool created by Capital One. You can use it to automatically manage and monitor public cloud resources as defined by user written policies. Cloud Custodian works in AWS, Google Cloud Platform and Azure. We, of course, use it in AWS.

As a flexible “rules engine,” Cloud Custodian allowed us to define rules and remediation efforts into one policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. These policies are written in a simple YAML configuration file that specifies a resource type, resource filters and actions to be taken on specified targets. Once a policy is written, Cloud Custodian can interpret the policy file and deploy it as a Lambda function into an AWS account. Each policy gets its own Lambda function that enforces the user-defined rules on a user-defined cadence. At the time of this writing, Cloud Custodian supports 109 resources, 524 unique actions and 376 unique filters.
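To make the anatomy concrete, here is a minimal sketch of what such a policy file can look like. It is illustrative only (the policy name, opt-out tag, IAM role and schedule are hypothetical, not taken from Code42’s environment), but it shows the resource, filters and actions structure, along with the periodic Lambda deployment mode described above.

```yaml
policies:
  - name: ebs-unattached-volumes        # hypothetical policy, not a Code42 policy
    resource: aws.ebs                   # the AWS resource type to target
    mode:
      type: periodic                    # deploy as a Lambda that runs on a schedule
      schedule: "rate(1 day)"
      role: custodian-policy-role       # hypothetical Lambda execution role
    filters:
      - Attachments: []                 # volume is not attached to any instance
      - "tag:keep": absent              # simple opt-out tag
    actions:
      - type: tag                       # mark the volume for review rather than delete it
        key: c7n_status
        value: unused-review
```

Running `custodian validate` against a file like this locally is a low-risk way to check a policy’s syntax before it is wired into a deployment pipeline.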

As opposed to writing and combining multiple custom scripts that make AWS API calls, retrieve responses and then execute further actions based on the results, Cloud Custodian simply interprets an easy-to-write policy that takes the resources, filters and actions into consideration and translates them into the appropriate AWS API calls. These simplifications make this type of work easy and achievable even for non-developers.

“ As a flexible rules engine, Cloud Custodian allowed us to define rules and remediation efforts into one policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. ”

Now that we understand the basic concepts of Cloud Custodian, let’s cover the general implementation. Cloud Custodian policies are written and validated locally. These policies are then deployed either by running Cloud Custodian locally and authenticating to AWS or, in our case, via CI/CD pipelines. At Code42, we deploy a baseline set of policies to every AWS account as part of the bootstrapping process and then add or remove policies as needed for specific environments. In addition to account-specific policies, there are scenarios where a team may need an exemption; as such, we typically allow an “opt-out” tag for some policies. Policy violations are reported to a Slack channel via a webhook created for each AWS account. In addition, we also distribute the resources.json logs directly into a SIEM for more robust handling and alerting.

Broadly speaking, Code42 has categorized policies into two types: (i) notify only and (ii) action and notify. Notify-only policies are more hygiene-related and include policies like tag compliance checks, multi-factor authentication checks and more. Action and notify policies take actions once certain conditions are met, unless the resource is tagged for exemption; they include policies like s3-global-grants, ec2-off-hours-enforcement and more. The output from the Custodian policies is also ingested into a SIEM solution to provide more robust visualization and alerting. This allows individual account owners to review policy violations and assign remediation actions to their teams. For Code42, these dashboards give both the security team and account owners a view of the overall health of our security controls and account hygiene. Examples of Code42 policies may be found on GitHub.

What policies did we implement?

There are three primary policy types Code42 deployed: cost savings, hygiene and security. Since policies can take actions on resources, we learned that it is imperative that the team implementing the policies collaborate closely with any teams affected by those policies to ensure all stakeholders know how to find and react to alerts and can provide proper feedback and adjustments when necessary. Good collaboration with your stakeholders will ultimately drive the level of success you achieve with this tool. Let’s hit on a few specific policies.

Cost Savings Policy – ec2-off-hours-enforcement

EC2 instances are one of AWS’s most commonly used services. EC2 allows a user to deploy cloud compute resources on demand as necessary; however, there are many cases where the compute gets left “on” even when it’s not being used, which racks up costs. With Cloud Custodian, we’ve allowed teams to define “off-hours” for their compute resources. For example, if I have a machine that only needs to be online two hours a day, I can automate the start and stop of that instance on a schedule. This saves 22 hours of compute time per day. As AWS usage increases and expands, these cost savings add up quickly.
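Below is a sketch of what an off-hours policy pair can look like, using Cloud Custodian’s built-in offhour and onhour filters. The timezone, hours, role and policy names are illustrative assumptions rather than Code42’s actual configuration; individual instances can also carry their own schedule in a tag (the filter’s default tag key is maintenance_window), which is how a per-machine "only online two hours a day" schedule can be expressed.

```yaml
policies:
  # Stop instances outside business hours.
  - name: ec2-off-hours-stop
    resource: aws.ec2
    mode:
      type: periodic
      schedule: "rate(1 hour)"        # off-hours filters are evaluated hourly
      role: custodian-policy-role     # hypothetical Lambda execution role
    filters:
      - type: offhour
        default_tz: ct                # Central time; illustrative
        offhour: 19                   # stop at 7 p.m. unless a schedule tag overrides it
    actions:
      - stop

  # Start them again at the beginning of the work day.
  - name: ec2-on-hours-start
    resource: aws.ec2
    mode:
      type: periodic
      schedule: "rate(1 hour)"
      role: custodian-policy-role
    filters:
      - type: onhour
        default_tz: ct
        onhour: 7                     # start at 7 a.m.
    actions:
      - start
```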

Hygiene Policy – ec2-tag-enforcement

AWS resource tagging is highly recommended in any environment. Tagging allows you to define multiple keys with values on resources that can be used for sorting, tracking, accountability and more. At Code42, we require a pre-defined set of tags on every resource that supports tagging in every account. Manually enforcing this would be nearly impossible. As such, we utilized a Custodian policy to enforce our tagging requirements across the board. This policy performs the series of actions described below.

  1. The policy applies filters to look for all EC2 resources missing the required tags.
  2. When a violation is found, the policy adds a new tag to the resource “marking” it as a violation.
  3. The policy notifies account owners of the violation and that the violating instance will be stopped and terminated after a set time if it is not fixed.

If Cloud Custodian finds the tags have been added within 24 hours, it will remove the violation marker. If the proper tags are still not added, the policy continues to notify account owners that their instance will be terminated. If the violation is not fixed within the specified time period, the instance is terminated and a final notification is sent.

This policy ultimately ensures we have tags that distinguish things like a resource “owner.” An owner tag allows us to identify which team owns a resource and where the deployment code for that resource might exist. With this information, we can drastically reduce investigation/remediation times for misconfigurations or for troubleshooting live issues.
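As a hedged sketch, the mark-and-unmark stages of a policy like this can be expressed with Cloud Custodian’s mark-for-op and marked-for-op primitives. The required tag keys, marker tag, grace period and enforcement op below are illustrative assumptions, not Code42’s actual policy; the notification steps (via c7n-mailer) and the eventual escalation to termination would follow the same marked-for-op pattern with later deadlines.

```yaml
policies:
  # Stage 1: mark EC2 instances missing required tags for a delayed stop.
  - name: ec2-tag-enforcement-mark
    resource: aws.ec2
    filters:
      - or:                               # required tag keys are illustrative
          - "tag:owner": absent
          - "tag:cost-center": absent
      - "tag:c7n_tag_violation": absent   # don't re-mark already flagged instances
    actions:
      - type: mark-for-op
        tag: c7n_tag_violation
        op: stop
        days: 1                           # grace period before the stop

  # Stage 2: clear the marker on instances that were fixed in time.
  - name: ec2-tag-enforcement-unmark
    resource: aws.ec2
    filters:
      - "tag:c7n_tag_violation": not-null
      - "tag:owner": not-null
      - "tag:cost-center": not-null
    actions:
      - type: remove-tag
        tags: [c7n_tag_violation]

  # Stage 3: stop instances whose grace period has expired.
  - name: ec2-tag-enforcement-stop
    resource: aws.ec2
    filters:
      - type: marked-for-op
        tag: c7n_tag_violation
        op: stop
    actions:
      - stop
```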

Security Policy – S3-delete-unencrypted-on-creation

At Code42, we require that all S3 buckets have either KMS or AES-256 encryption enabled. It is important to remember that we have an “opt-out” capability built into these policies so they can be bypassed when necessary and after approval. The bypass is done via a tag that is easy for us to search for and review to ensure bucket scope and drift are managed appropriately.

This policy is relatively straightforward. If the policy sees a “CreateBucket” CloudTrail event, it checks the bucket for encryption. If no encryption is enabled and an appropriate bypass tag is not found, the policy deletes the bucket immediately and notifies the account owners. It’s likely by this point you’ve heard of a data leak due to a misconfigured S3 bucket. It can be nearly impossible to manually manage a large-scale S3 deployment or buckets created by shadow IT. This policy helps account owners learn good security hygiene, and at the same time it ensures our security controls are met automatically without having to search through accounts and buckets by hand. Ultimately, this helps verify that S3 misconfigurations don’t lead to unexpected data leaks.
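An event-driven policy of this shape can be sketched roughly as follows. The execution role and opt-out tag key are hypothetical placeholders, and a real deployment would typically pair the delete action with a notify step via c7n-mailer.

```yaml
policies:
  - name: s3-delete-unencrypted-on-creation
    resource: aws.s3
    mode:
      type: cloudtrail                      # Lambda triggered by the CreateBucket API call
      role: custodian-policy-role           # hypothetical execution role
      events:
        - CreateBucket
    filters:
      - type: bucket-encryption
        state: false                        # no default encryption (neither AES-256 nor KMS)
      - "tag:encryption-exempt": absent     # hypothetical opt-out tag, applied after approval
    actions:
      - delete                              # remove the newly created, unencrypted bucket
```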

Just starting out?

Hopefully this blog helped highlight the power of Capital One’s Cloud Custodian and its automation capabilities. Cloud Custodian policies can be easily learned and written by non-developers, and they provide needed security capabilities. Check out the links in the “Resources” section below for Capital One’s documentation, as well as examples of some of Code42’s baseline policies that get deployed into every AWS account during our bootstrap process. Note: these policies should be tuned according to your business and environment needs, and not all will be applicable to you.

Resources:

Authors:

Aakif Shaikh, CISSP, CISA, CEH, CHFI is a senior security analyst at Code42. His responsibilities include cloud security, security consulting, penetration testing and insider threat management. Aakif brings 12+ years of experience across a wide variety of technical domains within information security, including information assurance, compliance and risk management. Connect with Aakif Shaikh on LinkedIn.


Byron Enos is a senior security engineer at Code42, focused on cloud security and DevSecOps. Byron has spent the last four years helping develop secure solutions for multiple public and private clouds. Connect with Byron Enos on LinkedIn.


Jim Razmus II is director of cloud architecture at Code42. He tames complexity, seeks simplicity and designs elegantly. Connect with Jim Razmus II on LinkedIn.

Tips From the Trenches: Automating Change Management for DevOps

One of the core beliefs of our security team at Code42 is SIMPLICITY. All too often, we make security too complex, whether because there are no easy answers or because the answers are very nuanced. But complexity also makes it really easy for users to find work-arounds or ignore good practices altogether. So, we champion simplicity whenever possible and make it a basic premise of all the security programs we build.

“ At Code42, we champion simplicity whenever possible and make it a basic premise of all the security programs we build. ”

Change management is a great example of this. Most people hear change management and groan. At Code42, we’ve made great efforts to build a program that is nimble, flexible and effective. The tenets we’ve defined to drive our program are to:

  • PREVENT issues (collusion, duplicate changes)
  • CONFIRM changes are authorized changes
  • DETECT issues (customer support, incident investigation)
  • COMPLY with regulatory requirements

Notice compliance is there, but last on the list. While we do not discount the importance of compliance in the conversations around change management or any other security program, we avoid at all costs using the justification of “because compliance” for anything we do.

Based on these tenets, we focus our efforts on high-impact changes that have the potential to affect our customers (both external and internal). We set risk-based maintenance windows that balance potential customer impact with the need to move efficiently.

We gather with representatives from both the departments making changes (think IT, operations, R&D, security) and those impacted by changes (support, sales, IX, UX) at our weekly Change Advisory Board meeting–one of the best attended and most efficient meetings of the week–to review, discuss and make sure teams are appropriately informed of what changes are happening and how they might be impacted.

This approach has been working really well. Well enough, in fact, for our Research Development & Operations (RDO) team to embrace DevOps in earnest.

New products and services were being deployed through automated pipelines instead of through our traditional release schedule. Instead of bundling lots of small changes into a product release, developers were now looking to create, test and deploy features individually–and autonomously. This was awesome! But also, our change management program–even in its simplicity–was not going to cut it.

“ We needed to ensure change control would not become a blocker in an otherwise automated process. We looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically open tickets, giving us visibility into the work being done. ”

So with the four tenets we used to build our main program, we set off to evolve change management for our automated deployments. Thankfully, because all the impacted teams had seen the value of our change management program to date, they were on board and instrumental in evolving the program.

But an additional tenet had to be considered for the pipeline changes: we needed to ensure change control would not become a blocker in an otherwise automated process. So we looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically open tickets, giving us visibility into the work being done. We defined levels of risk tied to the deployments and set approvers and release windows based on risk. This serves both as a control to minimize potential impact to customers and as a challenge to developers to push code that is as resilient and low-impact as possible so they can deploy at will.

We still have work to do. Today we are tracking when changes are deployed manually. In our near-future state, our pipeline tooling will serve as a gate and hold higher-risk deployments to be released in maintenance windows. Additionally, we want to focus on risk, so we are building in commit hooks with required approvers based on risk rating. And, again, because we worked closely with the impacted teams to build a program that fit their goals (and because our existing program had proven its value to the entire organization), the new process is working well.

Most importantly, evolving our change process for our automated workflows allows us to continue to best serve our customers by iterating faster and getting features and fixes to market sooner.

Connect with Michelle Killian on LinkedIn.


Finally, a DLP for Macs

It’s time to face the facts: Macs are everywhere in the enterprise. A 2018 survey from Jamf found that more than half of enterprise organizations (52%) offer their employees a choice in their device of preference. Not entirely surprisingly, 72% of employees choose Mac. The Apple wave within business environments has begun and only promises to grow over time.

“ Legacy Data Loss Prevention (DLP) solutions don’t account for the Mac phenomenon and were not designed with them in mind. ”

The problem is that legacy Data Loss Prevention (DLP) solutions don’t account for the Mac phenomenon and were not designed with Macs in mind. As a result, legacy DLPs often approach Macs as an afterthought rather than a core strategy. Customer opinions of their DLP for Macs continue to be unfavorable. In fact, last year at Jamf’s JNUC event in Minneapolis, Mac users quickly revealed their sheer frustration with DLP and how it wasn’t built for Macs. Code42 customers currently using legacy DLP vendors vented about their Mac DLP experience, saying, “It just sucks!”

Naturally, we asked why.

  1. No Support – Mac updates can be fast and furious. Unfortunately, DLP has traditionally struggled to keep up with those updates. The result? Errors, kernel panics and increased risk of data loss.
  2. No OS Consistency – We often forget that today’s businesses often use both Mac and Windows. DLP has traditionally maintained a very Windows-centric approach that has made the Mac experience secondary and inconsistent with Windows. Having two sets of users with varying levels of data risk is never good.
  3. It’s Slow – The number one issue often stems from performance-sucking agents that bring the productivity of Mac users to a screeching halt.
  4. Kernel Panics – This is worth reiterating. Macs are sensitive to anything that poses a threat, so whenever macOS perceives unsanctioned DLP software as a threat, the result is reboots and an increased risk of downtime.
  5. It’s Complicated – Traditional DLP still relies on legacy hardware and manual updates, which is time consuming and expensive.

Recently, Code42 unveiled its Next-Gen Data Loss Protection Solution at the RSA Conference 2019. One of the reasons our 50,000+ customers love us is precisely because of the superior Mac experience we deliver. Our next-gen DLP solution was built with the Mac user in mind. Learn more about our trusted and proven take on DLP for Mac.


Product Spotlight: Identify Risk to Data Using Advanced Exfiltration Detection

When it comes to data loss protection, there are fundamental security questions that every organization needs to answer. These include, “Who has access to what files?” and “When and how are those files leaving my organization?”

Code42 Next-Gen Data Loss Protection helps you get answers to these questions in seconds by monitoring and investigating file activity across endpoints and cloud services. And now, Code42 has expanded its investigation capabilities to provide greater visibility into removable media, personal cloud and web browser usage by allowing security analysts to search file activity such as:

  • Files synced to personal cloud services. Code42 monitors files that exist in a folder used for syncing with cloud services, including iCloud, Box, Dropbox, Google Drive and Microsoft OneDrive.
  • Use of removable media. Code42 monitors file activity on external devices, such as an external drive or memory card.
  • Files read by browsers and apps. Code42 monitors files opened in an app that is commonly used for uploading files, such as a web browser, Slack, FTP client or curl.

Advanced Exfiltration Detection can be applied to proactively monitor risky user activity — such as the use of USBs across an organization — as well as to eliminate blind spots during security investigations. For example, imagine you’ve just learned that a confidential roadmap presentation was accidentally sent to the wrong email distribution list. Sure, it can later be deleted from the email server. But did anyone download it? Has anyone shared it? By using Code42 to perform a quick search of the file name, you can answer those questions in seconds. You’ll not only see which users have downloaded the attachment, but also that one has since saved the file to a personal Dropbox account. With this information in hand, you can quickly take action against this risky data exposure.

See Advanced Exfiltration Detection in action.



Securing Data in Cloud Chaos

To succeed, every enterprise depends on data and the insights that can be gleaned from that data. Enterprises today are creating much more data than in prior years—much of it critical to their digital transformation efforts. And how this data is stored within enterprises has changed dramatically, which is having a profound impact on how that data must be secured.

How so? At one time, most enterprise data resided within enterprise databases and applications, and these applications remained (relatively) safely on enterprise endpoints or tucked back in the data center. Not anymore.

“ Gartner estimates that 80 percent of all corporate data today is actually stored unstructured. ”

That was the age of structured data. Today, data is more likely to be stored unstructured and reside in the form of word-processing files, spreadsheets, presentations, PDFs and many other common formats. The research firm Gartner estimates that 80 percent of all corporate data today is actually stored unstructured.

This means today our enterprise data is scattered everywhere. And just because it’s not structured within an application doesn’t mean the data isn’t critical – unstructured data today includes financial information, trade secrets, marketing plans and work with contractors and business partners. Not all of this data is the same nor is it managed in the same way — yet this data must be protected.

How we share unstructured data is also changing. No longer is data sent merely as email attachments. Today, data is shared through social media programs, cloud apps and communication platforms, such as Slack. In many organizations, staff are sharing sensitive data, such as consumer information, intellectual property, prospect lists, financial data and the like. Security teams need to be alerted when sensitive information is shared.

These trends should give pause to anyone concerned about securing their enterprise information.

“ One of the most important steps for any organization that wants to start proactively securing their unstructured data is to determine where that data resides and then find viable ways to protect that data. ”

According to our 2018 Data Exposure Report, 73 percent of security and IT leaders believe there is data in their company that only exists on endpoints and 80 percent of CISOs agree that they can’t protect what they can’t see. Seventy-four percent believe IT and security teams should have full visibility over corporate data.

Unfortunately, without a dedicated and continuous focus on securing unstructured data, such visibility won’t ever exist. Only chaos. 

Yes, most organizations take reasonable steps to protect their applications and databases from costly data breaches. They invest in endpoint technologies that protect their users’ endpoints from malware. They focus on database security, application security and related efforts. And they try to control access to their local enterprise network. But the challenging reality remains: even if an organization executed perfectly on such a security architecture, it would still leave itself open to a vast amount of data theft and exploitation. The reason is that organizations are ignoring roughly 80 percent of their data that exists unstructured. 

Legacy security methods haven’t kept pace

It’s critical enterprises get the security of their unstructured data right. Securing unstructured data is different than securing data stored within applications and databases. 

One of the most important, and likely first, steps for any organization that wants to start proactively securing their unstructured data is to determine where that data resides and then find viable ways to protect that data. Other capabilities they’ll need in place include monitoring who has access to that data, indexing file content across storage devices, cloud storage and cloud services, and monitoring that data for potential loss, misuse and theft.

Having these capabilities in place will not only help organizations to better secure that data and identify careless handling of data or even malicious insiders, but also improve the ability to conduct in-depth investigations and identify threats, preserve data for regulatory compliance demands and litigation situations, and rapidly recover lost or ransomed files.

The fact is that unstructured data is 80 percent of enterprise data today, and the places it’s being stored are expanding. It’s imperative you give it the appropriate level of focus. While you can’t put the unstructured data back in the centralized data center again, you can bring a structured approach to data security that will rein in the chaos and adequately protect your enterprise information.


Data Loss Protection: Redefining DLP

Data is one of the most valuable currencies in existence today. Companies can thrive or die based on data, and attackers—from run-of-the-mill hackers, to cybercrime syndicates, to nation states—aggressively target data. It’s no wonder that an entire industry of tools and practices exists for the sole purpose of securing and protecting data. However, data loss and data breaches are still a constant concern.

Perhaps the model of data loss prevention—or DLP—itself is flawed? I recently had an opportunity to speak with Vijay Ramanathan, senior vice president of product management at Code42, about this issue and about the unique perspective Code42 has on solving the DLP problem.

“Fundamentally—at its core—even the notion of what DLP stands for is different for us,” opened Vijay. “You know DLP as ‘data loss prevention’. We approach it as ‘data loss protection’.”

“ Rather than focusing all of the attention on prevention, it’s important to acknowledge that there’s a high probability that incidents will still occur, and have the tools in place to detect when that happens. ”

That is clever and makes for good marketing, but changing a word around is just semantics. I asked Vijay to explain what that means for customers, and why he—and Code42—believe that is a superior or more effective way to tackle this problem.

He explained, “We want to look at data and data loss more holistically rather than just putting prevention strategies in place.” He went on to compare the approach to the way we treat other things in life—like our homes. He pointed out that people have locks on doors to prevent unauthorized access, but that many also augment them with alarm systems, surveillance cameras and home insurance to create a well-rounded home security strategy. Data security should be no different.

Traditional DLP is fundamentally flawed

Vijay described why the traditional approach to DLP is broken.

The standard model of DLP requires organizations to define which data is sensitive or confidential, and which data is trivial or meaningless. There has to be an initial effort to catalog and assign classifications to all existing data, and an ongoing process for users to assign classification tags to data as new data is created. 

If you only have a few people, or a relatively small amount of data, this approach may be feasible. But, for most organizations, it is challenging—bordering on impossible—to effectively implement data labeling policies, or maintain accurate asset tagging at scale.

The second issue Vijay mentioned was that DLP often creates new issues. He told me that data classification and data handling policies are designed to prevent bad things from happening, but implementing additional policies is like protecting your home by building a taller fence. It only goes so far as a means of data protection, and it pushes users toward bad behavior. Employees who just want to get their jobs done will often subvert or circumvent the system, or intentionally misclassify data to avoid draconian policies.

“ Time to awareness or time to response is the most critical issue in cybersecurity today. The lag time before a company discovers a data loss incident is crucial. ”

Protection rather than prevention

So, what does Code42 do differently, and how does that translate to better data security? I asked Vijay to explain how the Code42 approach of data loss protection addresses these issues. 

“The whole approach of traditional DLP solutions seems highly problematic,” proclaimed Vijay. “Why don’t we just assume that all of the data is important? What’s important then is to make sure you understand what is happening with your data, so you can make reasonable, informed judgments about whether that access or activity makes sense or not.”

More locks and taller fences might work to some extent, but they will never be impervious. Rather than focusing all of the attention on prevention, it’s important to acknowledge that there’s a high probability that incidents will still occur, and have the tools in place to detect when that happens. 

Vijay stressed the importance of response time—and how quickly an organization can know what is happening. “Time to awareness or time to response is the most critical issue in cybersecurity today. The lag time before a company discovers a data loss incident is crucial.”

He explained that Code42 adopted a next generation DLP philosophy with no policies and no blocking. Code42 assumes that all data is important and provides customers with the visibility to know who is accessing it and where it is going, and the ability to monitor and alert without preventing users from doing legitimate work or hindering productivity in any way.

With this philosophy in mind, the company recently introduced its Code42 Next-Gen Data Loss Protection solution. It monitors and logs all activity. Within minutes of an event, Code42 can let you know that a file was edited or saved. Within 15 minutes, the file itself is captured and stored in the cloud. Customers can store every version of every file for as long as they choose to retain the data. Code42 also provides an industry-best search capability that allows all data from the previous 90 days to be quickly and easily searched at any time. 

Vijay shared that he believes the Code42 Next-Gen Data Loss Protection approach to data security is a better and more effective way to address this problem. Taking blocking and policies out of the equation makes it easier to administer and allows users to focus on being productive and efficient. The DVR-like ability to review past activity and home in on suspicious behavior day in and day out gives customers the peace of mind they need that their data is safe and sound.