Code42 provides your business with a variety of benefits, including stronger data security, risk mitigation, streamlined user workflows and increased productivity, all in a single product that ultimately saves you money. While Code42 has a few primary use cases, such as backup and recovery and device migration, we’ve learned that different customers use Code42 in different ways. To explore how customers use our product, we recently partnered with the talented team at creative agency Crash+Sues to create a series of animated videos featuring the voices and likenesses of actual Code42 users.
In our latest video, Naazer Ashraf, senior computing consultant at Lehigh University, explains why they rely on Code42 over sync and share products for data backup and restore. At Lehigh, one of the nation’s premier research universities, faculty are known for their excellence in research. Obviously, data is extremely important (and valuable) to researchers, so imagine the reaction when one researcher deleted files from Google Drive to save space–and discovered that doing so wiped the files for 10 other researchers. Naazer tells the story in just 42 seconds. Check it out below.
We recently partnered with the talented team at creative agency Crash+Sues to create a series of videos about the core features of Code42. This most recent video focuses on an all-too-common scenario in which an employee decides to steal valuable data from his employer. Unfortunately for him, this company has Code42’s Security Center.
Take a look today for an illustration of how Code42 and Security Center can help keep your enterprise’s data safe from insider threats.
What does McKinsey call one of the largest unsolved issues in cybersecurity today? Insider threat. McKinsey notes that a staggering half of all breaches between 2012 and 2017 had an insider threat component. To make consequential strides in combating insider threat, we must explore the topic further. Thanks to Verizon’s Threat Research Advisory Center, which produced the Verizon Insider Threat Report, we can take an in-depth look at the role insider threat plays in the broader cyber threat landscape.
The Verizon report draws on statistics from the company’s Data Breach Investigations Reports and lessons learned from hundreds of investigations conducted by its internal forensics teams. It highlights how quickly and easily insiders exfiltrate data, while detection often takes far longer.
“ Insider threat should no longer be a taboo subject for internal security teams. Denial has not helped – it has only resulted in time-to-discovery being months-to-years for most inside breaches. ”
A trio of Code42’s leading experts on insider threat shared their reactions to the report. Read on to find out their most compelling takeaways.
Jadee Hanson, CISO and VP of Information Systems at Code42, called out:
The top motivations for insider threats include financial gain (48%), which is not surprising. It is followed by FUN (23%). It’s deeply concerning to think that a colleague would do something detrimental to their own company… just for fun.
Detecting and mitigating insider threats requires a completely different approach than the one we (security teams) are used to for external threats. Insiders are active employees with active access, and sometimes the actions these individuals take look completely normal to a security analyst.
Security awareness and education, along with overall company culture, continue to be very effective ways to mitigate the risks of insider threats.
Data theft incidents are driven mostly by employees with little to no technical aptitude or organizational power. Regular users have access to sensitive and monetizable data, and unfortunately they are the ones behind most internal data breaches.
Code42’s Vijay Ramanathan, SVP Product Management, shared these thoughts:
Insider threat should no longer be a taboo subject for internal security teams. Denial has not helped – it has only resulted in time-to-discovery being months-to-years for most inside breaches. This is a massive blind spot for security teams. Also, this is a problem for all sorts of companies. Not just large ones.
The report outlines countermeasures that companies should take as part of a comprehensive data security strategy. This is a great starting point, but those measures (outlined on page 7) are complex and require skilled staff. They remain difficult for many companies, particularly smaller and mid-market organizations, to navigate, especially given the chronic skills shortage in the security industry.
The “Careless Worker” is called out as one of the harder vectors to protect against. Security teams need to take a proactive, “data hunting” approach to help them understand where data lives and moves, when it leaves the organization, and in what situations data is at risk.
Robust data collection and preservation, along with behavior analytics, are models that can help organizations understand where accidental or deliberate data exposure or exfiltration may be occurring. This need is going to become even more stark in the next 12-36 months as companies come to terms with the reality that current data security tools, technologies and practices (e.g., policy management, data classification, user blocking, highly skilled security staff) are not designed for a much more fluid and unpredictable future.
Mark Wojtasiak, VP of Portfolio Marketing, highlighted:
Nowhere in the report did Verizon say the goal was to prevent insider threats – the focus was all about detection, investigation and response. Verizon even called out DLP as a monitoring tool, likely to the chagrin of legacy DLP providers.
The single biggest problem relative to insider threats is detecting them in the first place, along with the length of time detection takes. I would argue that most insider breaches go undetected altogether and that the number of insider breaches is grossly underreported.
Detecting insider threats comes down to how effective a company is in defining, collecting, correlating, analyzing and reporting on insider indicators of compromise. This basically means “machining” a security analyst’s intuition.
Creating insider indicators of compromise is difficult because they rely heavily on what is considered “normal” or “abnormal,” which can vary greatly by company, department, job role, individual and the data itself. It’s a lot of work, so why not just use machine learning to do it?
Once an insider breach is detected and the investigation process starts, it can grow very complex quickly. Oftentimes multiple stakeholders are involved and organizations might hire or outsource digital forensic services, which can be expensive. There has to be a faster, simpler process, especially for small to mid-market companies, which can be devastated by insider threats.
Insider threat programs go way beyond the incident response process (detect, investigate, respond, communicate, etc.). Ongoing vulnerability audits and assessments are needed to fine-tune the insider indicators of compromise.
I still find it shocking that data classification continues to be a must-have, and that employees need to be trained on it, made aware of it and actually take the steps to classify the data they create. Couldn’t it be an indicator of compromise in and of itself if an employee self-classifies data as non-sensitive, then exfiltrates it?
Finally, it is clear that the key to establishing an insider threat program is to start with the data (called “assets” in the report), and then move to people.
Insider threats pose a significant risk to every business, and one that is often overlooked. While we would all like to think that employees’ intentions are good, we must prepare for malicious (or accidental) actions taken by those within our organizations. And because up to 80 percent of a company’s value lies in its intellectual property, insiders are in a position to do serious harm to your business. Is your business prepared to minimize the impact of these data threats?
It’s Time to Rethink DLP
See how reframing DLP around protection rather than prevention delivers powerful benefits.
My first experience as a Code42 customer actually began when I started deploying Code42 as an intern at Maxim Integrated. At that point, we were really focused on protecting data from loss through data backup. Code42 taught me all about how to stand up internal servers and deploy applications remotely. Really, working with Code42 was a godsend for me because it helped me advance in my career. It’s a big reason behind how I got to where I am.
Today, I am a systems engineer at MACOM. In my role, I am responsible for deployment, integrating systems and protecting MACOM’s most valuable data as we continue our digital transformation. Unlike my past experience as a Code42 customer, MACOM’s story doesn’t begin with endpoint backup; it begins with data monitoring.
“ Every company needs to understand how their data is flowing. Especially as many organizations, like MACOM, undergo digital transformations. ”
We knew that we needed to understand what was happening with the data on our endpoints, which led us to evaluate Code42’s Next-Gen Data Loss Protection solution. Having had a positive experience with Code42 in a past life, I was eager to learn more about this innovative new solution. It quickly became a match made in heaven.
While my experience with Code42 spans IT- and security-centric use cases, the common denominator across them all is data. Data is the core of any company’s competitive advantage. If somebody walks out with a prototype or design file on a USB drive, well, there it goes. Every company needs to understand how their data is flowing. Especially as many organizations, like MACOM, undergo digital transformations. It’s important to understand how data is moving between cloud services and USB drives.
MACOM has been a Code42 Next-Gen DLP customer for a little less than a year now, and we have already made significant strides in protecting our most valuable data. In fact, I will be co-hosting a session on this topic at Evolution19 with Code42 SE Isaac O’Connell. For a deeper dive into MACOM’s story, join Isaac and me on Wednesday, May 1 at 10:30 a.m. for our session, Using Next-Gen DLP to Protect Data from Inside Threats.
I hope to see many of you in Denver and hear about your own evolution with Code42. Pun intended!
In the old days, security teams and engineering teams were highly siloed: security teams were concerned with things like firewalls, anti-virus and ISO controls, while engineering teams were concerned with writing and debugging code in order to pass it along to another team, like an operations team, to deploy. When they communicated, it was often in the stilted form of audit findings, vulnerabilities and mandatory OWASP Top Ten training classes that left both sides feeling like they were mutually missing the point.
While that may have worked in the past, the speed at which development happens today means that changes are needed on both sides of the equation to improve efficiency and reduce risk. In this blog post, I’ll be talking about why security teams need to learn to code (the flip side of the equation, why engineering teams need to learn security, may be a future blog post).
“ Simply being comfortable with one or two languages can allow you to do code reviews and provide another pair of eyes to your engineers as well. ”
While it’s not uncommon for people to come into security having done software development work in the past, it is not the most typical career path. Oftentimes, people enter the security realm without any coding experience beyond perhaps a Java or Python course they took at school or online. Because security encompasses so many different activities, there would appear to be no downside if security folks outside of a few highly specialized roles, like penetration testing, didn’t have coding experience. However, I’m here to tell you that coding can be beneficial to any security professional, no matter the role.
Let’s start with automation. No matter what you are doing in security, odds are that you have some kind of repeatable process, such as collecting data, doing analysis, or performing some action, that you can automate. Fortunately, more and more applications have APIs available to take advantage of, and are therefore candidates for writing code to do the work so you don’t have to.
At this point, you may think that this sounds a lot like a job for a Security Orchestration Automation and Response (SOAR) tool. A SOAR tool can absolutely be used to automate activities, but already having a SOAR tool is certainly not a requirement. A simple script that ties together a couple of applications via an API to ingest, transform and save data elsewhere may be all you need in order to start getting value out of coding. Plus, this can be a great way to determine how much value you may be able to get out of a full-blown SOAR tool.
Learning to code won’t just help your own efficiency. Writing your own code can help make all of those OWASP Top Ten vulnerabilities much more concrete, which can lead to better security requirements when collaborating with engineers. Simply being comfortable with one or two languages can allow you to do code reviews and provide another pair of eyes to your engineers as well. It’s also incredibly valuable to be able to give engineers concrete solutions when they ask about how to remediate a particular vulnerability in code.
Here at Code42, our security team believes strongly in the value of learning to code. That’s why we’ve set a goal for our entire security team, no matter the role, to learn how to code and to automate at least one repetitive activity with code in 2019. By doing this, we will make our overall security team stronger, work more efficiently and provide more valuable information to our engineering teams.
No matter what we do in our jobs, we all want to provide value back to the organizations where we work. In some jobs, tangible evidence of value is very apparent, such as hitting your sales quota or shipping a feature in your software. In business continuity, that can be a bit of a challenge. To start, most people don’t understand what it is or what responsibilities are tied to it. If someone asks me what I do and my response is “business continuity,” the conversation usually goes a different direction shortly thereafter. That makes showing value to your company a challenge from the get-go.
“ If ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful business continuity program. ”
Here are a few key principles I have learned in my business continuity journey that have helped me show value within my organization:
Get buy-in from leadership
Real simple: your business continuity program has to have this in order to succeed. If you think you’re fully prepared to respond to and recover from a disaster without buy-in from leadership, you’re kidding yourself. Leadership needs to understand what you’re doing, why you’re doing it and how it will benefit their department and the company as a whole. This will give you top-level support and make your job easier. Having guidance from above will ensure your requests for resources, such as for a business impact analysis or recovery testing, are granted.
No doubt getting leadership’s attention can be a challenge, but it has to happen. I have been a part of organizations that didn’t have it, and the result was a program that could never meet its full potential because our requests for time and effort from other departments were never a priority.
At Code42, we worked with each member of our executive leadership team to outline what we were doing, why we were doing it and what assistance we would need from their department. Department leaders were then able to give direction on who they wanted us to work with, and that set the whole program in motion.
Narrow the scope of your program
On the surface this seems counterintuitive. Why not cover every function and supporting activity? The reasoning is that most companies don’t have a dedicated team of employees focused on business continuity; for some, business continuity is simply one of many responsibilities they hold. Beyond the manpower issue, the further you head into supporting functions and away from what’s really critical, the lower the rate of return for the company. The key is to focus on what’s critical. I have experienced this firsthand: my drive to make sure all business functions were documented and prepared for had me spending countless hours covering the full spectrum of the business. By the time I was finished, the data was already out of date, and the effort amounted to a poor use of resources with little to no value for the company.
When we worked with each member of the executive leadership team at Code42, we kept our scope to the top two critical functions that each department performs. This helped our program avoid the minutiae and focus squarely on what’s critical for supporting our product, our customers and our employees.
Make the information accessible
The information for your business continuity program should not be sequestered away from your employees; it should be easy to view and update. This is a rather obvious statement, but one that I have seen many companies struggle with. Here at Code42, we made a misstep by thinking the solution to our business continuity challenges lay with a continuity software provider. The intent was for it to help us manage all of our data, produce plans and be a one-stop shop for all things business continuity. Not long after onboarding, challenges started to emerge. The biggest was that the information was not accessible to the workforce. The other was that it didn’t tie in to any software already in use at Code42. It was on an island, and of little to no value to the business. A pivot was needed, and thankfully we didn’t have to go far for an answer.
The answer came from taking a step back and determining what tools employees use across the company on a day-to-day basis. For us, the answer lay within Confluence, which serves as our internal wiki. This is where we build out department-focused pages covering each department’s critical functions and dependencies. Connecting to Confluence allowed us to tie in another company-wide application, JIRA, for tickets related to vendor assessments, risks and incidents. Our focus throughout the process was to ensure value was being passed on to Code42 and its employees, and the key piece of that was having information easily accessible.
Business continuity has a number of inherent challenges, but if ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful program. I hope the principles I laid out here help you provide better value to your own company.
After unveiling our Next-Gen Data Loss Protection solution at the RSA Conference 2019 in San Francisco, just about every visitor to the Code42 booth asked: How is data loss protection different from data loss prevention?
To answer this question, I sat down with Dark Reading’s Terry Sweeney for a video interview. You’ll find the highlights of our conversation in a short video below — and you can watch the full interview at Dark Reading.
The home security analogy
I like to start with a simple analogy everyone can identify with: Let’s say a would-be burglar comes to your door while you’re at work. In theory, you can rest assured that the person will not break into your house — because you have locks on your doors, right? But we all know locks aren’t failsafe, so what if this individual does find a way in? You won’t know about any of this until you get home — hours later — or until you realize something is missing, perhaps days later. By then, it’s much harder to figure out what all was taken, who took it and when it was taken. That’s the problem with the traditional data loss prevention model: it’s focused on prevention — but if that fails, you’re not left with much.
Now, imagine you have Nest cams inside and outside your house. Your front-door Nest cam notifies you immediately, via smartphone, to activity at your front door. With real-time visibility, if you don’t recognize the face of the visitor and/or are concerned with the actions he takes next (e.g., picking the lock, breaking a window, etc.), you can take action right now. Even if you discover something missing later in the day, you have video logs that will help you figure out when that article was taken and how. Just like the Nest cams, Code42 Next-Gen Data Loss Protection shows you exactly what’s happening, when it’s happening — so you can decide if it’s important and take action now.
Paradigm shift: all data matters
Another major difference between legacy data loss prevention and Code42 Next-Gen Data Loss Protection is how the tools define the value of data. Traditional DLP tools require an organization to decide which data and files are valuable or sensitive — and then figure out how to encode those decisions in rules and policies. But today’s knowledge workers are constantly creating data — and it all matters. From developing new software, to innovating manufacturing processes, to providing consulting services, more and more businesses across every sector are ultimately in the business of making new ideas. For these “progressive makers,” as we call them at Code42, every file and every piece of data holds value in the chain of idea creation. And the value of any given piece of data can skyrocket in an instant — when a project turns from theoretical tinkering into tangible innovation.

Finally, while traditional forms of protected data like PCI, PII and HIPAA-regulated records tend to follow predictable formats and patterns that can be recognized through rules, all of this “idea data” is largely unstructured. The data relating to a software product launch, for example, might span from source code files, to Word documents containing marketing plans, to Excel spreadsheets with revenue forecasts and production budgets, to CRM data on target prospects. There’s no way to create a blanket “rule” for defining the structure or pattern of data relating to a valuable product launch.
“ In this new reality of endpoints and cloud where all data matters, Code42 offers an unmatched core capability: We’ve gotten really good at collecting and saving every file, from every user, on every device. ”
In this new reality of endpoints and cloud where all data matters, Code42 offers an unmatched core capability: We’ve gotten really good at collecting and saving every file, from every user, on every device. More importantly, we’ve gotten really good at doing it in near-real time, doing it cost-effectively and doing it without inhibiting users as they’re working. This means organizations no longer have to define, at the outset, what data matters. And this complete data collection unlocks the kind of immediate, comprehensive visibility that creates the foundation of data loss protection — and sets it apart from data loss prevention.
Two critical questions DLP buyers need to ask
One of my favorite questions from Terry Sweeney was, “What should a DLP buyer look for as they’re evaluating a solution?” My answer is simple:
How soon does the tool show you that something is going wrong?
How soon does the tool let you take action?
The most consistent and concerning finding from annual infosecurity reports like Verizon’s Data Breach Investigations Report and the Ponemon Institute’s Cost of a Data Breach Study is that most organizations aren’t discovering incidents for weeks — or months. In fact, the Ponemon Institute’s 2018 research showed the average breach took 197 days for an organization to discover. That’s six months before the investigation even begins — and even longer until the organization can attempt to take some remedial action. That’s a lot of time for data to be lost, tracks to get covered and stolen IP to do damage to a business.
Code42 Next-Gen Data Loss Protection cuts that time-to-awareness from months to minutes. Take the common example of a departing employee: You’ll know if they’ve taken data before they even leave the building — not months later when a rival launches a competing product. Moreover, you’re getting immediate and full visibility around the context of the departing employee’s data removal — you can look at the exact file(s) and see if it’s valuable and/or sensitive — so you can make decisions and take action quickly and confidently.
Enabling infosec automation
My discussion with Terry ended with a look at perhaps the most important factor driving infosecurity forward: the expanding role of automation in helping organizations manage and protect ever-increasing volumes of data. Many organizations fight expanding data security threats with a small handful of infosecurity staff — half of whom are “on loan” from IT. Automation and orchestration platforms pull together and make sense of all the alerts, reports and other data from various infosecurity tools — fighting false positives and alert fatigue, and allowing those teams to see more and do more with fewer human eyes.

But these platforms are only as good as the inputs they’re fed. They rely on comprehensive data feeds to ensure you can create the customized reports and alerts you need to reliably bolster your security automation. The complete security insights gathered by Code42 Next-Gen Data Loss Protection ensure there are no blind spots in that strategy. That’s why we’re focused on making sure all our tools plug into automation and orchestration platforms and support the workflow automation capabilities you already have in place. All Code42 tools are available through APIs. If you want us to integrate data and alerts to be automatically provisioned in your SIEM or orchestration tool, we can do that. If you want us to automatically raise an email alert to your ticketing system, we can do that, too. Furthermore, Code42’s Next-Gen DLP allows you to take a more proactive “data-hunting” approach to data security, much like you would with threat hunting to deal with external malware and attacks.
This is where the value of Code42 Next-Gen Data Loss Protection gets really exciting. Our tool gives you incredible off-the-shelf value; it does things no other tool can. We’re seeing organizations integrating our tool with advanced automation and orchestration platforms — using our tool in ways we hadn’t even considered — and really amplifying the value and driving up their return on investment.
Watch the video highlights of the Dark Reading interview here or you can watch the full interview at Dark Reading.
How enterprises build, deploy and manage software applications has been turned upside down in recent years. So too have long-held notions of who is a developer. Today, virtually anyone in an organization can become a developer—from traditional developers to quality assurance personnel to site reliability engineers.
Moreover, this trend includes an increasing number of traditional business workers, thanks to new low-code and so-called no-code development platforms. These platforms are making it possible for non-traditional developers, sometimes called citizen developers, to build more of the apps the enterprise needs. Of course, whenever an organization has someone new developing code, it potentially introduces new security, privacy and regulatory compliance risks.
“ Recently, at Code42, we trained our entire team, including anyone who works with customer data, to ensure everyone was using best practices to secure our production code and environments. ”
For most organizations, this means they must reconsider how they conduct security awareness and application security testing. Recently, at Code42, we trained our entire team, including anyone who works with customer data. This included the research and development team, quality assurance, cloud operations, site reliability engineers, product developers and others, all to ensure everyone was using best practices to secure our production code and environments.
We knew we needed to be innovative with this training. We couldn’t take everyone and put them in a formal classroom environment for 40 hours; that isn’t the best way for many technologists to learn.
Instead, we selected a capture the flag (CTF) event. We organized participants into teams and presented them with a number of puzzles designed to demonstrate common vulnerability mistakes, such as those in the OWASP Top 10. We wanted to create an engaging, hands-on event where everyone could learn new concepts around authentication, encryption management and other practices.
We had to create content that would be challenging and yet appropriate and interesting for everyone, including the engineers. It wasn’t easy, considering the teams use different tools and languages and have skillsets that vary widely. Watching the teams work through the CTF was fascinating because you could see their decision-making processes when it came to remediating the issues presented. For problems where a team wasn’t sure of the solution, we provided supporting training materials, including videos.
“ We had to create content that would be challenging and yet appropriate and interesting for everyone, including the engineers. It wasn’t easy, considering the teams use different tools and languages and have skillsets that vary widely. ”
While the event was a success overall, we certainly learned quite a bit that will create a better experience for everyone in our next training.
Let me say, the CTF style was exceptional. The approach enabled individuals to choose areas they needed to learn, and the instructional videos were well received by those who used them. But I’ll tell you, not everyone was happy. About three-quarters of my team loved it, and then the other quarter wanted to grab pitchforks and spears and chase me down.
First, throughout the contest, the lack of a common development language proved to be a challenge. Within most of the teams, the engineers chose the problems that were in a language with which they were familiar, which often cut the quality assurance or site reliability engineers out of helping on that problem. No good.
Gamification, while well intended, caused problems. As I mentioned, we had instructional videos; that way, if a team didn’t know the answer to a problem, they could watch the videos, get guidance and learn in the process. But because scoring rewarded speed, watching a video cost a team time, which actually caused individuals to skip the videos.
How we implemented the leaderboards proved counterproductive. Remember how we all (well, many of us) feared being the last person picked in gym class growing up? Well, leaderboards shouldn’t be present until the game ends, and even then to summarize only the top finishers. No one likes to know they were in the bottom 10 percent, and it doesn’t help the learning process.
Dispel the fear. These are training and awareness classes. While official, accredited security training often has a pass/fail outcome, this awareness training is for education. Even so, our employees feared their performance would somehow be viewed as bad and could affect their performance reviews — or employment. Face these rumors up front and make it clear the CTF results aren’t tied to work performance.
Overall, our team did learn valuable lessons using our CTF format — the innovative approach we took to educate them was successful in that way. But the next time I hold a contest, we will definitely incorporate changes based on the lessons above. And I’ll work harder to strike the right balance between a formal lecture-and-class setting and a competitive event when participants bring varying experience and skillsets.
Imagine terabytes of corporate data exposed in the wild by employees sharing publicly available links on the cloud. Sound far-fetched? It’s not. According to a recent article from SiliconANGLE, that’s exactly what happened when security researchers uncovered terabytes of data from over 90 companies, exposed by employees sharing publicly available links to Box Inc.’s cloud storage platform. And while it’s easy to think that this problem is restricted to Box, it is in fact a problem most cloud services, like Dropbox or OneDrive for Business, need to address.
“ Cloud security is failing every day due to public file share links – content that users deliberately or accidentally expose to outsiders or to unapproved users within the company. ”
Cloud security is failing every day due to public file share links – content that users deliberately or accidentally expose to outsiders or to unapproved users within the company. This exposes significant gaps in cloud security and compliance strategies and raises important questions, such as:
What data is going to an employee’s personal cloud?
Who’s making a link public instead of sharing it with specific people?
Are departments or teams using other/non-sanctioned clouds to get their work done?
Are contractors getting more visibility than they should in these clouds?
Compounding the problem, the remedy most cloud services offer administrators is to “configure shared link default access” for users. Administrators can configure shared link access so accidental or malicious links can’t be created in the first place. However, there is a clear loss of productivity when users who need ongoing collaboration and the ability to share are mistakenly denied. This is where IT and security teams need to strike a fine balance between protecting corporate IP and enabling user productivity.
Code42’s approach to DLP doesn’t block users or shut down sharing; it gives organizations visibility while information flows freely between partners, customers and users in general. Beyond flagging that a link has gone public in the first place, security protocols should further include:
Identifying files that are going to personal clouds
Understanding who’s sharing links publicly and why
Mitigating instances of non-sanctioned clouds
Gaining visibility into cloud privileges extended to contractors or other third parties
“We’re moving to the cloud.” If you haven’t heard this already, it’s likely you will soon. Moving to the public cloud poses many upfront challenges for businesses today. The primary problems that come to the forefront are security, cost and compliance. Where do businesses even start? How many tools do they need to purchase to fulfill these needs?
After deciding to jump-start our own cloud journey, we spun up our first account in AWS, and it was immediately apparent that traditional security controls weren’t necessarily going to adapt. Trying to lift and shift firewalls, threat and vulnerability management solutions and the like ran into a multitude of issues, including but not limited to networking, AWS IAM roles and permissions, and tool integrations. It was clear that tools built for on-premise deployments were no longer cost or technologically effective in AWS and a new solution was needed.
“ It was clear that tools built for on-premise deployments were no longer cost or technologically effective in AWS and a new solution was needed. ”
To remedy this, we decided to move to a multi-account strategy and automate our resource controls to support increasing consumption and account growth. Our answer was Capital One’s open source Cloud Custodian tool, because it helps us manage our AWS environments by ensuring the following business needs are met:
Compliance with security policies
Enforcement of AWS tagging requirements
Identification of unused resources for removal or review
Enforcement of off-hours to maximize cost savings
Enforcement of encryption requirements
Assurance that AWS Security Groups are not overly permissive
And many more…
After identifying a tool that could automate our required controls in multiple accounts, it was time to implement it. The rest of this blog will focus on how Cloud Custodian works, how Code42 uses the tool, what kinds of policies Code42 implemented (with examples) and resources to help you get started implementing Cloud Custodian in your own environment.
How Code42 uses Cloud Custodian
Cloud Custodian is an open source tool created by Capital One. You can use it to automatically manage and monitor public cloud resources as defined by user-written policies. Cloud Custodian works in AWS, Google Cloud Platform and Azure. We, of course, use it in AWS.
As a flexible “rules engine,” Cloud Custodian allows us to define rules and remediation efforts in a single policy. Cloud Custodian uses policies to target cloud resources with specified actions on a scheduled cadence. These policies are written in a simple YAML configuration file that specifies a resource type, resource filters and actions to be taken on specified targets. Once a policy is written, Cloud Custodian can interpret the policy file and deploy it as a Lambda function in an AWS account. Each policy gets its own Lambda function that enforces the user-defined rules on a user-defined cadence. At the time of this writing, Cloud Custodian supports 109 resources, 524 unique actions and 376 unique filters.
As opposed to writing and combining multiple custom scripts that make AWS API calls, retrieve responses and then execute further actions on the results, Cloud Custodian simply interprets an easy-to-write policy that takes the resources, filters and actions into consideration and translates them into the appropriate AWS API calls. These simplifications make this type of work easy and achievable even for non-developers.
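To make that concrete, here is a minimal sketch of what a policy file can look like. It flags unattached EBS volumes (one of the “unused resources” use cases above) and marks them for deletion after a review window. The policy name, schedule and role ARN are illustrative, not taken from our deployment:

```yaml
policies:
  - name: ebs-unattached-volumes            # illustrative name
    resource: ebs                           # the resource type to target
    mode:
      type: periodic                        # deploy as a Lambda on a schedule
      schedule: "rate(1 day)"               # the user-defined cadence
      role: arn:aws:iam::123456789012:role/custodian-lambda  # hypothetical execution role
    filters:
      - Attachments: []                     # volumes not attached to any instance
    actions:
      - type: mark-for-op                   # tag now, act later
        op: delete
        days: 7                             # give owners a week to review
```

Running `custodian run -s out policy.yml` executes the same policy locally in pull mode; it is the `mode` block that turns it into its own Lambda function.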
“ As a flexible rules engine, Cloud Custodian allows us to define rules and remediation efforts in a single policy. Cloud Custodian uses policies to target cloud resources with specified actions on a scheduled cadence. ”
Now that we understand the basic concepts of Cloud Custodian, let’s cover the general implementation. Cloud Custodian policies are written and validated locally. They are then deployed either by running Cloud Custodian locally and authenticating to AWS or, in our case, via CI/CD pipelines. At Code42, we deploy a baseline set of policies to every AWS account as part of the bootstrapping process and then add or remove policies as needed for specific environments. In addition to account-specific policies, there are scenarios where a team may need an exemption; as such, we typically allow an “opt-out” tag for some policies. Code42 has policy violations reported to a Slack channel via a webhook created for each AWS account. In addition, we distribute the resources.json logs directly into a SIEM for more robust handling and alerting.
Broadly speaking, Code42 has categorized policies into two types: (i) notify only and (ii) action and notify. Notify-only policies are more hygiene-related and include policies like tag compliance checks, multi-factor authentication checks and more. Action-and-notify policies take actions once certain conditions are met, unless the resource is tagged for exemption; they include policies like s3-global-grants, ec2-off-hours-enforcement and more. The output from the Custodian policies is also ingested into a SIEM solution to provide more robust visualization and alerting. This allows individual account owners to review policy violations and assign remediation actions to their teams. For Code42, these dashboards give both the security team and account owners a view of the overall health of our security controls and account hygiene. Examples of Code42 policies may be found on GitHub.
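As a sketch of how the notification half works: a policy’s actions block hands violations to the c7n-mailer add-on, which relays them to Slack. The template name is the mailer’s built-in Slack formatting, and the webhook and queue URLs below are placeholders, since each of our accounts gets its own:

```yaml
    actions:
      - type: notify                        # delivered by the c7n-mailer add-on
        slack_template: slack_default       # mailer's built-in Slack message format
        violation_desc: "Resource violates a baseline policy."
        to:
          - https://hooks.slack.com/services/T0000/B0000/XXXX  # hypothetical per-account webhook
        transport:
          type: sqs                         # mailer consumes violations from this queue
          queue: https://sqs.us-east-1.amazonaws.com/123456789012/c7n-mailer  # hypothetical
```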
What policies did we implement?
There are three primary policy types Code42 deployed: cost savings, hygiene and security. Since policies can take actions on resources, we learned it is imperative that the team implementing the policies collaborate closely with any teams affected by them, in order to ensure all stakeholders know how to find and react to alerts and can provide proper feedback and adjustments when necessary. Good collaboration with your stakeholders will ultimately drive the level of success you achieve with this tool. Let’s hit on a few specific policies.
Cost Savings Policy – ec2-off-hours-enforcement
EC2 instances are one of AWS’s most commonly used services. EC2 allows a user to deploy cloud compute resources on demand as necessary; however, in many cases the compute gets left “on” even when it’s not used, which racks up costs. With Cloud Custodian, we’ve allowed teams to define “off-hours” for their compute resources. For example, if I have a machine that only needs to be online two hours a day, I can automate the start and stop of that instance on a schedule, saving 22 hours of compute time per day. As AWS usage increases and expands, these cost savings add up quickly.
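Cloud Custodian ships offhour and onhour filters for exactly this purpose. Here is a sketch of what such a policy pair can look like; the hours, time zone and tag name are illustrative rather than our production values:

```yaml
policies:
  - name: ec2-off-hours-enforcement
    resource: ec2
    filters:
      - type: offhour                       # built-in off-hours filter
        default_tz: ct                      # assumed default time zone
        offhour: 19                         # stop instances at 7 p.m.
        tag: custodian_downtime             # teams can set custom schedules via this tag
    actions:
      - stop

  - name: ec2-on-hours-enforcement
    resource: ec2
    filters:
      - type: onhour
        default_tz: ct
        onhour: 7                           # start instances back up at 7 a.m.
        tag: custodian_downtime
    actions:
      - start
```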
Hygiene Policy – ec2-tag-enforcement
AWS resource tagging is highly recommended in any environment. Tagging allows you to define multiple keys with values on resources, which can be used for sorting, tracking, accountability and so on. At Code42, we require a pre-defined set of tags on every resource that supports tagging, in every account. Manually enforcing this would be nearly impossible, so we utilize a Custodian policy to enforce our tagging requirements across the board. This policy performs the series of actions described below (a sketch of a comparable policy follows the list).
The policy applies filters to look for all EC2 resources missing the required tags.
When a violation is found, the policy adds a new tag to the resource “marking” it as a violation.
The policy notifies account owners of the violation and that the violating instance will be stopped and terminated after a set time if it is not fixed.
If Cloud Custodian finds the tags have been added within 24 hours, it removes the violation marker. If the proper tags are not added, the policy continues to notify account owners that their instance will be terminated. If the violation is not fixed within the specified time period, the instance is terminated and a final notification is sent.
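Here is a rough sketch of how that workflow can be expressed as policies. The required tag keys, marker tag and timing are illustrative; our production policy uses Code42-specific values and pairs each step with the notify action sketched earlier:

```yaml
policies:
  - name: ec2-tag-enforcement-mark          # steps 1-3: find violations, mark and notify
    resource: ec2
    filters:
      - or:                                 # any required tag missing
          - "tag:owner": absent
          - "tag:department": absent
    actions:
      - type: mark-for-op                   # the "violation" marker
        tag: c7n_tag_violation
        op: stop
        days: 1                             # act in 24 hours if not fixed
      # a notify action (as sketched earlier) would go here

  - name: ec2-tag-enforcement-unmark        # step 4: clear the marker once tags appear
    resource: ec2
    filters:
      - "tag:c7n_tag_violation": not-null
      - "tag:owner": not-null
      - "tag:department": not-null
    actions:
      - type: remove-tag
        tags: [c7n_tag_violation]

  - name: ec2-tag-enforcement-stop          # step 4: act on markers that have come due
    resource: ec2
    filters:
      - type: marked-for-op
        tag: c7n_tag_violation
        op: stop
    actions:
      - stop
```

Escalating from stop to terminate follows the same mark-for-op pattern, with op: terminate and a longer window.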
This policy ultimately ensures we have tags that distinguish things like a resource “owner.” An owner tag allows us to identify which team owns a resource and where the deployment code for that resource might exist. With this information, we can drastically reduce investigation/remediation times for misconfigurations or for troubleshooting live issues.
Security Policy – S3 bucket encryption
At Code42, we require that all S3 buckets have either KMS or AES-256 encryption enabled. It is important to remember that we have an “opt-out” capability built into these policies, so they can be bypassed when necessary and after approval. The bypass is done via a tag that is easy for us to search for and review, ensuring bucket scope and drift are managed appropriately.
This policy is relatively straightforward. If the policy sees a “CreateBucket” CloudTrail event, it checks the bucket for encryption. If no encryption is enabled and an appropriate bypass tag is not found, the policy deletes the bucket immediately and notifies the account owners. It’s likely that by this point you’ve heard of a data leak caused by a misconfigured S3 bucket. It can be nearly impossible to manually manage a large-scale S3 deployment or buckets created by shadow IT. This policy helps account owners learn good security hygiene, and at the same time it ensures our security controls are met automatically, without having to search through accounts and buckets by hand. Ultimately, this helps verify that S3 misconfigurations don’t lead to unexpected data leaks.
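Here is a sketch of this kind of event-driven policy, again with illustrative names (including the bypass tag) and the notify action reduced to a comment:

```yaml
policies:
  - name: s3-encryption-required            # illustrative name
    resource: s3
    mode:
      type: cloudtrail                      # fire on an API event instead of a schedule
      events:
        - CreateBucket                      # built-in shorthand for the S3 CreateBucket event
      role: arn:aws:iam::123456789012:role/custodian-lambda  # hypothetical execution role
    filters:
      - type: bucket-encryption
        state: false                        # no KMS or AES-256 default encryption configured
      - "tag:encryption-opt-out": absent    # honor the approved bypass tag
    actions:
      - delete                              # remove the non-compliant bucket immediately
      # a notify action would alert the account owners here
```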
Just starting out?
Hopefully this blog helped highlight the power of Capital One’s Cloud Custodian and its automation capabilities. Cloud Custodian policies can be easily learned and written by non-developers, and they provide needed security capabilities. Check out the links in the “Resources” section below for Capital One’s documentation, as well as examples of some of Code42’s baseline policies that get deployed into every AWS account during our bootstrap process. Note: these policies should be tuned to your business and environment needs, and not all will be applicable to you.
Aakif Shaikh, CISSP, CISA, CEH, CHFI, is a senior security analyst at Code42. His responsibilities include cloud security, security consulting, penetration testing and insider threat management. Aakif brings 12+ years of experience in a wide variety of technical domains within information security, including information assurance, compliance and risk management. Connect with Aakif Shaikh on LinkedIn.
Byron Enos is a senior security engineer at Code42, focused on cloud security and DevSecOps. Byron has spent the last four years helping develop secure solutions for multiple public and private clouds. Connect with Byron Enos on LinkedIn.
One of the core beliefs of our security team at Code42 is SIMPLICITY. All too often, we make security too complex, sometimes because there are no easy answers or the answers are very nuanced. But complexity also makes it really easy for users to find workarounds or ignore good practices altogether. So we champion simplicity whenever possible and make it a basic premise of all the security programs we build.
“ At Code42, we champion simplicity whenever possible and make it a basic premise of all the security programs we build. ”
Change management is a great example of this. Most people hear change management and groan. At Code42, we’ve made great efforts to build a program that is nimble, flexible and effective. The tenets we’ve defined to drive our program are to:
Notice compliance is there, but last on the list. While we do not discount the importance of compliance in conversations around change management or any other security program, we avoid at all costs using “because compliance” as the justification for anything we do.
Based on these tenets, we focus our efforts on high-impact changes that have the potential to affect our customers (both external and internal). We set risk-based maintenance windows that balance potential customer impact with the need to move efficiently.
We gather with representatives from both the departments making changes (think IT, operations, R&D, security) and those impacted by changes (support, sales, IX, UX) at our weekly Change Advisory Board meeting–one of the best attended and most efficient meetings of the week–to review, discuss and make sure teams are appropriately informed of what changes are happening and how they might be impacted.
This approach has been working really well. Well enough, in fact, for our Research Development & Operations (RDO) team to embrace DevOps in earnest.
New products and services were being deployed through automated pipelines instead of through our traditional release schedule. Instead of bundling lots of small changes into a product release, developers were now looking to create, test and deploy features individually–and autonomously. This was awesome! But it also meant our change management program–even in its simplicity–was not going to cut it.
“ We could not let change control become a blocker in an otherwise automated process. We looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets, giving us visibility into the work being done. ”
So, with the four tenets we used to build our main program, we set off to evolve change management for our automated deployments. Thankfully, because all the impacted teams had seen the value of our change management program to date, they were on board and instrumental in evolving the program.
But an additional tenet had to be considered for the pipeline changes: we could not let change control become a blocker in an otherwise automated process. So we looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets, giving us visibility into the work being done. We defined levels of risk tied to the deployments and set approvers and release windows based on that risk. This serves both as a control to minimize potential impact to customers and as a challenge to developers to push code that is as resilient and low impact as possible so they can deploy at will.
We still have work to do. Today, tracking when changes are deployed is a manual process. In our near-future state, our pipeline tooling will serve as a gate, holding higher-risk deployments for release in maintenance windows. Additionally, we want to keep the focus on risk, so we are building in commit hooks with required approvers based on risk rating. And again, because we worked closely with the impacted teams to build a program that fit their goals (and because our existing program had proven its value to the entire organization), the new process is working well.
Most importantly, evolving our change process for our automated workflows allows us to continue to best serve our customers by iterating faster and getting features and fixes to the market faster.
It’s time to face the facts: Macs are everywhere in the enterprise. In fact, a 2018 survey from Jamf found that more than half of enterprise organizations (52%) offer their employees a choice of device. Not entirely surprisingly, 72% of employees choose Mac. The Apple wave within business environments has begun and only promises to grow over time.
“ Legacy Data Loss Prevention (DLP) solutions don’t account for the Mac phenomenon and were not designed with them in mind. ”
The problem is that legacy Data Loss Prevention (DLP) solutions don’t account for the Mac phenomenon and were not designed with them in mind. As a result, legacy DLP vendors often approach Macs as an afterthought rather than a core strategy. Customer opinions of their DLP on Macs continue to be unfavorable. In fact, last year at Jamf’s JNUC event in Minneapolis, Mac users quickly revealed their sheer frustration with DLP and how it wasn’t built for Macs. Code42 customers currently using legacy DLP vendors vented about their Mac DLP experience, saying, “It just sucks!”
Naturally, we asked why.
No Support – Mac updates can come fast and furious. Unfortunately, DLP has traditionally struggled to keep up with those updates. The result? Errors, kernel panics and increased risk of data loss.
No OS Consistency – We often forget that today’s businesses often use both Mac and Windows. DLP has traditionally maintained a very Windows-centric approach that has made the Mac experience secondary and inconsistent with Windows. Having two sets of users with varying levels of data risk is never good.
It’s Slow – The number one issue often stems from performance-sucking agents that bring the productivity of Mac users to a screeching halt.
Kernel Panics – This is worth reiterating. Macs are sensitive to anything they perceive as a threat, so when unsanctioned DLP software sets off that alarm, the result is kernel panics, reboots and an increased risk of downtime.
It’s Complicated – Traditional DLP still relies on legacy hardware and manual updates, which is time consuming and expensive.