
Learnings From Verizon’s Insider Threat Report

What does McKinsey call one of the largest unsolved issues in cybersecurity today? Insider threat. They noted that a staggering half of all breaches between 2012 and 2017 had an insider threat component. To make consequential strides in combating insider threat, the topic must be explored further. Thanks to Verizon’s Threat Research Advisory Center, which produced the Verizon Insider Threat Report, we can take an in-depth look at the role insider threat plays in the broader cyber threat landscape.

The Verizon report draws on statistics from their Data Breach Investigations Reports and lessons learned from hundreds of investigations conducted by their internal forensics teams. It highlights the ease with which insiders exfiltrate data and, by contrast, how long detection often takes.

“ Insider threat should no longer be a taboo subject for internal security teams. Denial has not helped – it has only resulted in time-to-discovery being months-to-years for most inside breaches. ”

A trio of Code42’s leading experts on insider threat shared their reactions to the report. Read on to find out their most compelling takeaways.

Jadee Hanson, CISO and VP of Information Systems for Code42, called out:

  • The top motivation for insider threats is financial gain (48%), which is not surprising. This is followed by FUN (23%). It’s deeply concerning to think that a colleague would do something detrimental to their own company… just for fun. 
  • Detecting and mitigating insider threats requires a completely different approach than what we (security teams) are used to when it comes to external threats. Insiders are active employees with active access, and sometimes the actions these individuals take look completely normal to a security analyst. 
  • Security awareness and education, along with overall company culture, continue to be very effective ways to mitigate the risks of insider threats. 

  • Data theft incidents are driven mostly by employees with little to no technical aptitude or organizational power. Regular users have access to sensitive and monetizable data and, unfortunately, are too often the ones behind internal data breaches.

Code42’s Vijay Ramanathan, SVP Product Management, shared these thoughts: 

  • Insider threat should no longer be a taboo subject for internal security teams. Denial has not helped – it has only resulted in time-to-discovery being months-to-years for most inside breaches. This is a massive blind spot for security teams. Also, this is a problem for all sorts of companies. Not just large ones.

  • The report outlines countermeasures that companies should take as part of a comprehensive data security strategy. This is a great starting point. But those measures (outlined on page 7) are nonetheless complex and require skilled staff. This continues to be difficult for many companies, particularly smaller and mid-market organizations, to navigate, especially because of the chronic skills shortage in the security industry. 

  • The “Careless Worker” is called out as one of the harder vectors to protect against. Security teams need to take a proactive, “data hunting” approach to help them understand where data lives and moves, when it leaves the organization, and in what situations data is at risk.

  • Robust data collection and preservation, along with behavior analytics, are models that can help organizations understand where accidental or deliberate data exposure/exfiltration may be occurring. This need is going to become even more stark in the next 12-36 months as companies come to terms with the reality that current data security tools, technologies and practices (e.g., policy management, data classification, user blocking, highly-skilled security staff) are not designed for a much more fluid and unpredictable future.

Mark Wojtasiak, VP of Portfolio Marketing, highlighted: 

  • Nowhere in the report did Verizon say the goal was to prevent insider threats – the focus was all about detection, investigation and response. Verizon even called out DLP as a monitoring tool, likely to the chagrin of legacy DLP providers.
  • The single biggest problem with insider threats is detecting them in the first place and the length of time detection takes. I argue that most insider breaches go undetected altogether and that the number of insider breaches is grossly underreported.
  • Detecting insider threats comes down to how effective a company is in defining, collecting, correlating, analyzing and reporting on insider indicators of compromise. This basically means “machining” a security analyst’s intuition.
  • Creating insider indicators of compromise is difficult because they rely heavily on what is considered “normal” or “abnormal,” which can vary greatly by company, department, job role, individual and the data itself. It’s a lot of work, so why not just use machine learning to do it? 
  • Once an insider breach is detected and the investigation process starts, it can grow very complex quickly. Oftentimes multiple stakeholders are involved and organizations might hire or outsource digital forensic services, which can be expensive. There has to be a faster, simpler process, especially for small to mid-market companies, which can be devastated by insider threats.
  • Insider Threat Programs go way beyond the incident response process (detect – investigate – respond – communicate, etc.). Ongoing vulnerability audits and assessments are needed to fine tune the insider indicators of compromise.
  • I still find it shocking that data classification continues to be a must-have – and that employees need to be trained on it, made aware of it and actually take the steps to classify the data they create. Couldn’t it be an indicator of compromise in and of itself if an employee self-classifies data as non-sensitive, then exfiltrates it? 
  • Finally, it is clear that the key to establishing an insider threat program is to start with the data (called “assets” in the report), and then move to people. 

The rise of insider threats poses a significant risk to every business, and one that is often overlooked. While we all would like to think that employees’ intentions are good, we must prepare for malicious (or accidental) actions taken by those within our organizations. And because up to 80 percent of a company’s value lies in its intellectual property, insiders are in a position to do serious harm to your business. Is your business prepared to minimize the impact of these data threats?


My Career and Data Security Evolution

My first experience as a Code42 customer actually began when I started deploying Code42 as an intern at Maxim Integrated. At that point, we were really focused on protecting data from loss through data backup. Code42 taught me all about how to stand up internal servers and deploy the application remotely. Really, working with Code42 was a godsend for me because it helped me advance in my career. It’s a big reason behind how I got to where I am.

Today, I am a system engineer at MACOM. In my role, I am responsible for deployment, integrating systems and protecting MACOM’s most valuable data as we continue our digital transformation. Unlike my past experience as a Code42 customer, MACOM’s story doesn’t begin with endpoint backup; it actually begins with data monitoring.

“ Every company needs to understand how their data is flowing. Especially as many organizations, like MACOM, undergo digital transformations. ”

We knew that we needed to understand what was happening in regard to the data on our endpoints, which led us to evaluating Code42’s Next-Gen Data Loss Protection solution. Having had a positive experience with Code42 in a past life, I was eager to learn more about this innovative new solution. It quickly became a match made in heaven.

While my experience with Code42 spans IT and security centric use cases, the common denominator across them all is data. Data is the core of any company’s competitive advantage. If somebody walks out with a prototype or design file on a USB, well then there it goes. Every company needs to understand how their data is flowing. Especially as many organizations, like MACOM, undergo digital transformations. It’s important to understand how data is moving between cloud services and USB drives.

MACOM has been a Code42 Next-Gen DLP customer for a little less than a year now, and we have already made significant strides related to protecting our most valuable data. In fact, I will be co-hosting a session at Evolution19 on this topic with Code42 SE, Isaac O’Connell. For a deeper dive into MACOM’s story, join Isaac and me on Wednesday, May 1 at 10:30 am for our session, Using Next-Gen DLP to Protect Data from Inside Threats.

I hope to see many of you in Denver and hear about your own evolution with Code42. Pun intended!


Tips From the Trenches: Security Needs to Learn to Code

In the old days, security teams and engineering teams were highly siloed: security teams were concerned with things like firewalls, anti-virus and ISO controls, while engineering teams were concerned with writing and debugging code in order to pass it along to another team, like an operations team, to deploy. When they communicated, it was often in the stilted form of audit findings, vulnerabilities and mandatory OWASP Top Ten training classes that left both sides feeling like they were mutually missing the point.

While that may have worked in the past, the speed at which development happens today means that changes are needed on both sides of the equation to improve efficiency and reduce risk. In this blog post, I’ll be talking about why security teams need to learn to code (the flip side of the equation, why engineering teams need to learn security, may be a future blog post).

“ Simply being comfortable with one or two languages can allow you to do code reviews and provide another pair of eyes to your engineers as well. ”

While it’s not uncommon for people to come into security having done code development work in the past, it is not necessarily the most typical career path. Oftentimes, people come into the security realm without any coding experience other than perhaps a Java or Python course they took at school or online. Because security encompasses so many different activities, there would appear to be no downside if security folks outside of a few highly specialized roles, like penetration testing, didn’t have coding experience. However, I’m here to tell you that coding can be beneficial to any security professional, no matter the role.

Let’s start with automation. No matter what you are doing in security, odds are that you have some kind of repeatable process, such as collecting data, doing analysis, or performing some action, that you can automate. Fortunately, more and more applications have APIs available to take advantage of, and are therefore candidates for writing code to do the work so you don’t have to.

At this point, you may think that this sounds a lot like a job for a Security Orchestration Automation and Response (SOAR) tool. A SOAR tool can absolutely be used to automate activities, but already having a SOAR tool is certainly not a requirement. A simple script that ties together a couple of applications via an API to ingest, transform and save data elsewhere may be all you need in order to start getting value out of coding. Plus, this can be a great way to determine how much value you may be able to get out of a full-blown SOAR tool.
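To make that concrete, here is a minimal sketch of that kind of glue script in Python. The alert API, webhook URL and JSON fields are hypothetical placeholders rather than any particular product's interface; the point is simply how little code it takes to ingest data from one tool, transform it and push the result somewhere useful.

import os
import requests

# Hypothetical endpoints; substitute the APIs of the tools you actually use.
ALERT_API = "https://alerts.example.com/api/v1/alerts"
CHAT_WEBHOOK = os.environ["CHAT_WEBHOOK_URL"]
API_TOKEN = os.environ["ALERT_API_TOKEN"]

def fetch_open_alerts():
    """Ingest: pull open alerts from the source application's REST API."""
    response = requests.get(
        ALERT_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"status": "open"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("alerts", [])

def summarize(alerts):
    """Transform: keep the high-severity alerts and build a short report."""
    high = [a for a in alerts if a.get("severity") == "high"]
    if not high:
        return ""
    lines = [f"- {a.get('title', 'untitled')} (id: {a.get('id')})" for a in high]
    return f"{len(high)} high-severity alerts are open:\n" + "\n".join(lines)

def post_summary(message):
    """Act: push the summary to a chat webhook where the team will see it."""
    requests.post(CHAT_WEBHOOK, json={"text": message}, timeout=30).raise_for_status()

if __name__ == "__main__":
    summary = summarize(fetch_open_alerts())
    if summary:
        post_summary(summary)

A script like this, run on a schedule, is often enough to show whether a full-blown SOAR investment would actually pay off.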

Learning to code won’t just help your own efficiency. Writing your own code can help make all of those OWASP Top Ten vulnerabilities much more concrete, which can lead to better security requirements when collaborating with engineers. Simply being comfortable with one or two languages can allow you to do code reviews and provide another pair of eyes to your engineers as well. It’s also incredibly valuable to be able to give engineers concrete solutions when they ask about how to remediate a particular vulnerability in code.

Here at Code42, our security team believes strongly in the value of learning to code. That’s why we’ve set a goal for our entire security team, no matter the role, to learn how to code and to automate at least one repetitive activity with code in 2019. By doing this, we will make our overall security team stronger, work more efficiently and provide more valuable information to our engineering teams.

Happy coding!

Connect with Nathan Hunstad on LinkedIn.


Tips From the Trenches: Providing Value Through Business Continuity

No matter what we do in our jobs, we all want to provide value back to the organizations where we work. With some jobs, tangible evidence of value is very apparent, such as hitting your sales quota or building code for a feature in your software. In business continuity, that can be a bit of a challenge. To start, most people don’t understand what it is or what responsibilities are tied to it. If someone asks me what I do, and my response is: “business continuity,” the conversation usually goes a different direction shortly thereafter. This makes showing value to your company a challenge from the get-go.

“ If ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful business continuity program. ”

Here are a few key principles I have learned in my business continuity journey that have helped me show value within my organization:

Leadership buy-in

Real simple: your business continuity program has to have this in order to succeed. If you think you’re fully prepared to respond to and recover from a disaster without buy-in from leadership, you’re kidding yourself. Leadership needs to understand what you’re doing, why you’re doing it and how it will benefit their department and the company as a whole. This will give you top-level support and make your job easier. Having guidance from above will ensure your requests for resources for business impact analyses and recovery testing are granted.

No doubt getting leadership’s attention can be a challenge, but it has to happen. I have been a part of organizations that didn’t have it, and the result was a program that could never meet its full potential because our requests for time and effort from other departments were never a priority.

At Code42, we worked with each member of our executive leadership team to outline what we were doing, why we were doing it and what assistance we would need from their department. Department leaders were then able to give direction on who they wanted us to work with and set the whole program in motion.

Narrow the scope of your program

On the surface this seems counterintuitive. Why not cover every function and supporting activity? The reason is that most companies don’t have a dedicated team of employees focused on business continuity. For some, business continuity is simply one of many responsibilities they hold. Beyond the staffing constraint, the further you head into supporting functions and away from what’s really critical, the lower the rate of return for the company. The key is to focus on what’s critical. I have experienced this firsthand: my drive to make sure all business functions were documented and prepared for had me spending countless hours covering the full spectrum of the business. By the time I was finished, the data was already out of date, and the effort amounted to a poor use of resources with little to no value for the company.

When we worked with each member of the executive leadership team at Code42, we kept our scope to the top two critical functions that each department performs. This helped our program avoid the minutiae and focus squarely on what’s critical for supporting our product, our customers and our employees.

Make the information accessible

The information for your business continuity program should not be sequestered away from your employees; it should be easy to view and update. This is a rather obvious statement, but one that I have seen many companies struggle with. Here at Code42, we made a misstep by thinking the solution to our business continuity challenges lay with a continuity software provider. The intent was for it to help us manage all of our data, produce plans and be a one-stop shop for all things business continuity. Not long after onboarding, challenges started to emerge. The biggest was that the information was not accessible to the workforce. The other was that it didn’t tie in to any software already in use at Code42. It was on an island, and of little to no value to the business. A pivot was needed, and thankfully we didn’t have to go far for an answer.

The answer came from taking a step back and determining what tools employees use across the company on a day-to-day basis. For us, the answer lay within Confluence, which serves as our internal wiki. This is where we build out department-focused pages covering their respective critical functions and dependencies. Connecting to Confluence allowed us to tie in another company-wide application, JIRA, for tickets related to vendor assessments, risks and incidents. Our focus throughout the process was to ensure value was being passed on to Code42 and its employees, and the key piece of that was having information easily accessible.

Business continuity has a number of inherent challenges, but if ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful program. I hope these principles I laid out help you provide better value to your own company.

Connect with Loren Sadlack on LinkedIn.


Successful Software Security Training Lessons Learned

How enterprises build, deploy and manage software apps and applications has been turned upside down in recent years. So too have long-held notions of who is a developer. Today, virtually anyone in an organization can become a developer—from traditional developers to quality assurance personnel to site reliability engineers. 

Moreover, this trend includes an increasing number of traditional business workers, thanks to new low-code and so-called no-code development platforms. These platforms are making it possible for these non-traditional developers, sometimes called citizen developers, to build more of the apps the enterprise needs. Of course, whenever an organization has someone new developing code, it potentially introduces new security, privacy and regulatory compliance risks. 

“ Recently, at Code42, we trained our entire team, including anyone who works with customer data, to ensure everyone was using best practices to secure our production code and environments. ”

For most organizations, this means they must reconsider how they conduct security awareness and application security testing. Recently, at Code42, we trained our entire team, including anyone who works with customer data. This comprised the research and development team, quality assurance, cloud operations, site reliability engineers, product developers and others, ensuring everyone was using best practices to secure our production code and environments. 

We knew we needed to be innovative with this training. We couldn’t take everyone and put them in a formal classroom environment for 40 hours. That isn’t the best format for many technologists to learn in.

Instead, we selected a capture the flag (CTF) event. We organized participants into teams that were presented with a number of puzzles designed to demonstrate common vulnerability mistakes, such as those in the OWASP Top 10. We wanted to create an engaging hands-on event where everyone could learn new concepts around authentication, encryption management and other practices. 

We had to create content that would challenge and yet be appropriate and interesting for everyone, including the engineers. It wasn’t easy considering each of the teams use different tools and languages and have skillsets that vary widely. Watching the teams work through the CTF was fascinating because you could see their decision-making processes when it came to remediating the issues presented. For problems where a team wasn’t sure of the solution, we provided support training materials, including videos.

“ We had to create content that would challenge and yet be appropriate and interesting for everyone, including the engineers. It wasn’t easy considering each of the teams use different tools and languages and have skillsets that vary widely. ”

While the event was a success overall, we certainly learned quite a bit that will create a better experience for everyone in our next training. 

Let me say, the CTF style was exceptional. The approach enabled individuals to choose areas they needed to learn, and the instructional videos were well received by those who used them. But I’ll tell you, not everyone was happy. About three-quarters of my team loved it, and then the other quarter wanted to grab pitchforks and spears and chase me down.

First, throughout the contest, the lack of a common development language proved to be a challenge. Within most of the teams, the engineers chose the problems that were in a language with which they were familiar. This often cut the quality assurance or site reliability engineers out of helping on that problem. No good.

Gamification, while well intended, caused problems. As I mentioned, we had instructional videos. That way, if a team didn’t know the answer to a problem, they could watch the videos, get guidance and learn in the process. But gamification added a time element to the contest, which actually caused individuals to skip the videos.

How we implemented the leaderboards proved counterproductive. Remember how we all (well, many of us) feared being the last person picked in gym class growing up? Well, leaderboards shouldn’t be visible until the game ends, and even then should summarize only the top finishers. No one likes to know they were in the bottom 10 percent, and it doesn’t help the learning process.

Dispel the fear. These are training and awareness classes. While official, accredited security training often has a pass/fail outcome, this awareness training is purely for education. However, our employees feared their performance would somehow be viewed as bad and could affect their performance reviews—or employment. Address these rumors up front and make it clear the CTF results aren’t tied to work performance.

Overall, our team did learn valuable lessons using our CTF format — the innovative approach we took to educate them was successful in that way. But next time I hold a contest, we will definitely incorporate changes based on the lessons above. And I’ll work harder to strike the balance between a formal lecture-and-class setting and a competitive event when participants bring varying experience and skill sets.

  


Security Pitfalls of Shared Public Links

Imagine terabytes of corporate data exposed in the wild by employees sharing publicly available links on the cloud. Sound far-fetched? It’s not. According to a recent article from SiliconANGLE, that’s exactly what happened when security researchers uncovered terabytes of data from over 90 companies exposed by employees sharing publicly available links to Box Inc.’s cloud storage platform. And while it’s easy to think that this problem is restricted to Box, it is in fact a problem most cloud services, like Dropbox or OneDrive for Business, need to address.

“ Cloud security is failing every day due to public file share links – content that users deliberately or accidentally expose to outsiders or to unapproved users within the company. ”

Cloud security is failing every day due to public file share links – content that users deliberately or accidentally expose to outsiders or to unapproved users within the company. This presents significant gaps in cloud security and compliance strategies and raises important questions such as:

  • What data is going to an employee’s personal cloud?
  • Who’s making a link public instead of sharing it with specific people?
  • Are departments or teams using other/non-sanctioned clouds to get their work done?
  • Are contractors getting more visibility than they should in these clouds?

Compounding the problem, the remedy that most cloud services provide to administrators is to “configure shared link default access” for users. Administrators can configure shared link access so accidental or malicious links can’t be created in the first place; however, there is a clear loss of productivity when users who need continued collaboration and the ability to share are mistakenly denied. This is where IT/security teams need to strike the fine balance between protecting corporate IP and enabling user productivity.

Code42’s approach to DLP doesn’t block users or shut down sharing, giving organizations visibility while information flows freely between partners, customers and users in general. Beyond recognizing that a link has gone public in the first place, security protocols should further include:

  • Identifying files that are going to personal clouds
  • Understanding who’s sharing links publicly and why
  • Mitigating instances of non-sanctioned clouds
  • Gaining visibility into cloud privileges extended to contractors or other third parties

Tips From the Trenches: Cloud Custodian–Automating AWS Security, Cost and Compliance

“We’re moving to the cloud.” If you haven’t heard this already, it’s likely you will soon. Moving to the public cloud poses many challenges upfront for businesses today. Primary problems that come to the forefront are security, cost and compliance. Where do businesses even start? How many tools do they need to purchase to fulfill these needs?

After deciding to jump start our own cloud journey, we spun up our first account in AWS, and it was immediately apparent that traditional security controls weren’t necessarily going to adapt. Trying to lift and shift firewalls, threat and vulnerability management solutions and the like ran into a multitude of issues, including but not limited to networking, AWS IAM roles and permissions, and tool integrations. It was clear that tools built for on-premise deployments were no longer cost or technologically effective in AWS and a new solution was needed.

“ It was clear that tools built for on-premise deployments were no longer cost or technologically effective in AWS and a new solution was needed. ”

To remedy these discoveries, we decided to move to a multi-account strategy and automate our resource controls to support increasing consumption and account growth. Our answer to this was Capital One’s Cloud Custodian open source tool because it helps us manage our AWS environments by ensuring the following business needs are met:

  • Compliance with security policies
  • Enforcement of AWS tagging requirements
  • Identification of unused resources for removal/review
  • Enforcement of off-hours to maximize cost reduction
  • Enforcement of encryption requirements
  • Assurance that AWS Security Groups are not overly permissive
  • And many more…

After identifying a tool that could automate our required controls in multiple accounts, it was time to implement it. The rest of this blog will focus on how Cloud Custodian works, how Code42 uses the tool, what kinds of policies (with examples) Code42 implemented and resources to help you get started implementing Cloud Custodian in your own environment.

How Code42 uses Cloud Custodian

Cloud Custodian is an open source tool created by Capital One. You can use it to automatically manage and monitor public cloud resources as defined by user written policies. Cloud Custodian works in AWS, Google Cloud Platform and Azure. We, of course, use it in AWS.

As a flexible “rules engine,” Cloud Custodian allowed us to define rules and remediation efforts into one policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. These policies are written in a simple YAML configuration file that specifies a resource type, resource filters and actions to be taken on specified targets. Once a policy is written, Cloud Custodian can interpret the policy file and deploy it as a Lambda function into an AWS account. Each policy gets its own Lambda function that enforces the user-defined rules on a user-defined cadence. At the time of this writing, Cloud Custodian supports 109 resources, 524 unique actions and 376 unique filters.

As opposed to writing and combining multiple custom scripts that make AWS API calls, retrieve responses and then execute further actions on the results, Cloud Custodian simply interprets an easy-to-write policy, takes into consideration the resources, filters and actions it specifies and translates them into the appropriate AWS API calls. These simplifications make this type of work easy and achievable even for non-developers.
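As an illustration only (not one of Code42’s production policies), a minimal policy file might look like the following sketch; the Owner tag name and the choice to stop non-compliant instances are assumptions for the example:

policies:
  - name: ec2-missing-owner-tag
    resource: ec2
    comment: Stop running EC2 instances that have no Owner tag
    filters:
      - "tag:Owner": absent
      - type: value
        key: "State.Name"
        value: running
    actions:
      - stop

Running custodian validate against a file like this checks the policy syntax before it is ever deployed.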

“ As a flexible rules engine, Cloud Custodian allowed us to define rules and remediation efforts into one policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. ”

Now that we understand the basic concepts of Cloud Custodian, let’s cover the general implementation. Cloud Custodian policies are written and validated locally. These policies are then deployed either by running Cloud Custodian locally and authenticating to AWS or, in our case, via CI/CD pipelines. At Code42, we deploy a baseline set of policies to every AWS account as part of the bootstrapping process and then add/remove policies as needed for specific environments. In addition to account-specific policies, there are scenarios where a team may need an exemption; as such, we typically allow an “opt-out” tag for some policies. Code42 has policy violations reported to a Slack channel via a webhook created for each AWS account. In addition, we also distribute the resources.json logs directly into a SIEM for more robust handling/alerting.

Broadly speaking, Code42 has categorized policies into two types – (i) notify only and (ii) action and notify. Notify-only policies are more hygiene-related and include policies like tag compliance checks, multi-factor authentication checks and more. Action and notify policies take actions after certain conditions are met, unless the resource is tagged for an exemption; they include policies like s3-global-grants, ec2-off-hours-enforcement and more. The output from the Custodian policies is also ingested into a SIEM solution to provide more robust visualization and alerting. This allows individual account owners to review policy violations and assign remediation actions to their teams. For Code42, these dashboards give both the security team and account owners a view of the overall health of our security controls and account hygiene. Examples of Code42 policies may be found at GitHub.

What policies did we implement?

There are three primary policy types Code42 deployed: cost savings, hygiene and security. Since policies can take actions on resources, we learned that it is imperative that the team implementing the policies collaborate closely with any teams affected by them, ensuring all stakeholders know how to find and react to alerts and can provide proper feedback and adjustments when necessary. Good collaboration with your stakeholders will ultimately drive the level of success you achieve with this tool. Let’s hit on a few specific policies.

Cost Savings Policy – ec2-off-hours-enforcement

EC2 is one of AWS’s most commonly used services. It allows a user to deploy cloud compute resources on demand as necessary; however, there are many cases where the compute gets left “on” even when it’s not being used, which racks up costs. With Cloud Custodian we’ve allowed teams to define “off-hours” for their compute resources. For example, if I have a machine that only needs to be online two hours a day, I can automate the start and stop of that instance on a schedule. This saves 22 hours of compute time per day. As AWS usage increases and expands, these cost savings add up quickly.
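For illustration, a pair of policies along these lines might look like the following sketch; the hours, time zone and use of the stock offhour/onhour filters are assumptions, not Code42’s actual schedule:

policies:
  - name: ec2-offhours-stop
    resource: ec2
    comment: Stop instances outside working hours (7 p.m. local time)
    filters:
      - type: offhour
        default_tz: ct
        offhour: 19
    actions:
      - stop
  - name: ec2-onhours-start
    resource: ec2
    comment: Start instances again at the beginning of the workday
    filters:
      - type: onhour
        default_tz: ct
        onhour: 7
    actions:
      - start

Cloud Custodian’s off-hours support can also read a schedule tag on the resource itself, which is one way a machine that only needs to be online a couple of hours a day can get its own window without a separate policy.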

Hygiene Policy – ec2-tag-enforcement

AWS resource tagging is highly recommended in any environment. Tagging allows you to define multiple keys with values on resources that can be used for sorting, tracking, accountability, etc. At Code42, we require a pre-defined set of tags on every resource that supports tagging in every account. Manually enforcing this would be nearly impossible. As such, we utilized a Custodian policy to enforce our tagging requirements across the board. This policy performs the series of actions described below.

  1. The policy applies filters to look for all EC2 resources missing the required tags.
  2. When a violation is found, the policy adds a new tag to the resource “marking” it as a violation.
  3. The policy notifies account owners of the violation and that the violating instance will be stopped and terminated after a set time if it is not fixed.

If Cloud Custodian finds that the tags have been added within 24 hours, it will remove the violation tag. If the proper tags are not added, the policy continues to notify account owners that their instance will be terminated. If it is not fixed within the specified time period, the instance is terminated and a final notification is sent.

This policy ultimately ensures we have tags that distinguish things like a resource “owner.” An owner tag allows us to identify which team owns a resource and where the deployment code for that resource might exist. With this information, we can drastically reduce investigation/remediation times for misconfigurations or for troubleshooting live issues.
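A simplified sketch of this pattern is shown below. The required tag names, the 24-hour grace period and the use of stop rather than terminate are illustrative assumptions, and the notification step (Cloud Custodian’s notify action, which relies on the separate c7n-mailer component) is omitted for brevity:

policies:
  # 1. Mark EC2 instances that are missing a required tag
  - name: ec2-tag-compliance-mark
    resource: ec2
    filters:
      - or:
          - "tag:owner": absent
          - "tag:cost-center": absent
      - "tag:c7n_tag_compliance": absent
    actions:
      - type: mark-for-op
        tag: c7n_tag_compliance
        op: stop
        days: 1

  # 2. Unmark instances whose tags have since been fixed
  - name: ec2-tag-compliance-unmark
    resource: ec2
    filters:
      - "tag:c7n_tag_compliance": not-null
      - "tag:owner": not-null
      - "tag:cost-center": not-null
    actions:
      - type: remove-tag
        tags: [c7n_tag_compliance]

  # 3. Stop instances that are still marked after the grace period
  - name: ec2-tag-compliance-stop
    resource: ec2
    filters:
      - type: marked-for-op
        tag: c7n_tag_compliance
        op: stop
    actions:
      - stop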

Security Policy – S3-delete-unencrypted-on-creation

At Code42, we require that all S3 buckets have either KMS or AES-256 encryption enabled. It is important to remember that we have an “opt-out” capability built into these policies so they can be bypassed when necessary and after approval. The bypass is done via a tag that is easy for us to search for and review to ensure bucket scope and drift are managed appropriately.

This policy is relatively straightforward. If the policy sees a “CreateBucket” CloudTrail event, it checks the bucket for encryption. If no encryption is enabled and an appropriate bypass tag is not found, then the policy deletes the bucket immediately and notifies the account owners. It’s likely that by this point you’ve heard of a data leak caused by a misconfigured S3 bucket. It can be nearly impossible to manually manage a large-scale S3 deployment or buckets created by shadow IT. This policy helps account owners learn good security hygiene, and at the same time it ensures our security controls are met automatically without having to search through accounts and buckets by hand. Ultimately, this helps verify that S3 misconfigurations don’t lead to unexpected data leaks.
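For illustration, the shape of such an event-driven policy is sketched below. The IAM role, the exemption tag name and the use of the bucket-encryption filter are assumptions for the example rather than Code42’s exact policy, and the notification step is again omitted:

policies:
  - name: s3-delete-unencrypted-on-creation
    resource: s3
    mode:
      # Run as a Lambda triggered by the CreateBucket CloudTrail event
      type: cloudtrail
      role: arn:aws:iam::123456789012:role/custodian-lambda-role  # placeholder account/role
      events:
        - CreateBucket
    filters:
      # No default encryption configured on the new bucket
      - type: bucket-encryption
        state: false
      # No approved bypass tag present
      - "tag:encryption-exempt": absent
    actions:
      - type: delete
        remove-contents: true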

Just starting out?

Hopefully this blog helped highlight the power of Capital One’s Cloud Custodian and its automation capabilities. Cloud Custodian policies can be easily learned and written by non-developers, and they provide needed security capabilities. Check out the links in the “Resources” section below for Capital One’s documentation, as well as examples of some of Code42’s baseline policies that get deployed into every AWS account during our bootstrap process. Note: these policies should be tuned to your business and environment needs, and not all will be applicable to you.

Resources:

Authors:

Aakif Shaikh, CISSP, CISA, CEH, CHFI is a senior security analyst at Code42. His responsibilities include cloud security, security consulting, penetration testing and insider threat management. Aakif brings 12+ years of experience across a wide variety of technical domains within information security, including information assurance, compliance and risk management. Connect with Aakif Shaikh on LinkedIn.


Byron Enos is a senior security engineer at Code42, focused on cloud security and DevSecOps. Byron has spent the last four years helping develop secure solutions for multiple public and private clouds. Connect with Byron Enos on LinkedIn.


Jim Razmus II is director of cloud architecture at Code42. He tames complexity, seeks simplicity and designs elegantly. Connect with Jim Razmus II on LinkedIn.

Tips From the Trenches: Automating Change Management for DevOps

One of the core beliefs of our security team at Code42 is SIMPLICITY. All too often, we make security too complex, often because there are no easy answers or the answers are very nuanced. But complexity also makes it really easy for users to find work-arounds or ignore good practices altogether. So, we champion simplicity whenever possible and make it a basic premise of all the security programs we build.

“ At Code42, we champion simplicity whenever possible and make it a basic premise of all the security programs we build. ”

Change management is a great example of this. Most people hear change management and groan. At Code42, we’ve made great efforts to build a program that is nimble, flexible and effective. The tenets we’ve defined that drive our program are to:

  • PREVENT issues (collusion, duplicate changes)
  • CONFIRM changes are authorized changes
  • DETECT issues (customer support, incident investigation)
  • COMPLY with regulatory requirements

Notice compliance is there, but last on the list. While we do not discount the importance of compliance in the conversations around change management or any other security program, we avoid at all costs using the justification of “because compliance” for anything we do.

Based on these tenets, we focus our efforts on high-impact changes that have the potential to impact our customers (both external and internal). We set risk-based maintenance windows that balance potential customer impact with the need to move efficiently.

We gather with representatives from both the departments making changes (think IT, operations, R&D, security) and those impacted by changes (support, sales, IX, UX) at our weekly Change Advisory Board meeting–one of the best attended and most efficient meetings of the week–to review, discuss and make sure teams are appropriately informed of what changes are happening and how they might be impacted.

This approach has been working really well. Well enough, in fact, for our Research Development & Operations (RDO) team to embrace DevOps in earnest.

New products and services were being deployed through automated pipelines instead of through our traditional release schedule. Instead of bundling lots of small changes into a product release, developers were now looking to create, test and deploy features individually–and autonomously. This was awesome! But also, our change management program–even in its simplicity–was not going to cut it.

“ We needed to not make change control a blocker in an otherwise automated process. We looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets to give us visibility to the work being done. ”

So with the four tenets we used to build our main program, we set off to evolve change management for our automated deployments. Thankfully, because all the impacted teams have seen the value of our change management program to date, they were on board and instrumental in evolving the program.

But an additional tenet had to be considered for the pipeline changes. We needed to not make change control a blocker in an otherwise automated process. So we looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets to give us visibility into the work being done, as sketched below. We defined levels of risk tied to the deployments and set approvers and release windows based on risk. This serves both as a control to minimize potential impact to customers and as a challenge to developers to push code that is as resilient and low impact as possible so they can deploy at will.
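As a rough illustration of that kind of integration (the ticketing endpoint, fields and risk rule here are hypothetical placeholders, not our actual tooling), a pipeline step might record each deployment like this:

import os
import sys
import requests

# Hypothetical ticketing endpoint; substitute your own system's API.
TICKET_API = "https://tickets.example.com/api/v2/changes"
RISK_LEVELS_NEEDING_APPROVAL = {"high"}

def create_change_ticket(service, version, risk):
    """Create a change ticket for this deployment and return its ID."""
    response = requests.post(
        TICKET_API,
        json={
            "summary": f"Automated deploy: {service} {version}",
            "type": "Change",
            "risk": risk,
        },
        headers={"Authorization": f"Bearer {os.environ['TICKET_API_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["id"]

if __name__ == "__main__":
    service, version, risk = sys.argv[1:4]
    ticket_id = create_change_ticket(service, version, risk)
    print(f"Created change ticket {ticket_id} for {service} {version} (risk: {risk})")
    if risk in RISK_LEVELS_NEEDING_APPROVAL:
        # A non-zero exit tells the pipeline to pause this deploy until an
        # approver releases it during a maintenance window.
        print("High-risk change: holding for approval.")
        sys.exit(1)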

We still have work to do. Today we are tracking when changes are deployed manually. In our near future state our pipeline tooling will serve as a gate and hold higher risk deployments to be released in maintenance windows. Additionally, we want to focus on risk, so we are building in commit hooks with required approvers based on risk rating. And, again, because we worked closely with the impacted teams to build a program that fit their goals (and because our existing program had proven its value to the entire organization), the new process is working well.

Most importantly, evolving our change process for our automated workflows allows us to continue to best serve our customers by iterating faster and getting features and fixes to the market faster.

Connect with Michelle Killian on LinkedIn.


Securing Data in Cloud Chaos

To succeed, every enterprise depends on data and the insights that can be gleaned from that data. Enterprises today are creating much more data than in prior years—much of it critical to their digital transformation efforts. And how this data is stored within enterprises has changed dramatically, which is having a profound impact on how that data must be secured.

How so? At one time, most enterprise data resided within enterprise databases and applications, and these applications remained (relatively) safely on enterprise endpoints or tucked back in the data center. Not anymore.

“ Gartner estimates that 80 percent of all corporate data today is actually stored unstructured. ”

That was the age of structured data. Today, data is more likely to be stored unstructured and reside in the form of word-processing files, spreadsheets, presentations, PDFs and many other common formats. The research firm Gartner estimates that 80 percent of all corporate data today is actually stored unstructured.

This means today our enterprise data is scattered everywhere. And just because it’s not structured within an application doesn’t mean the data isn’t critical – unstructured data today includes financial information, trade secrets, marketing plans and work with contractors and business partners. Not all of this data is the same nor is it managed in the same way — yet this data must be protected.

How we share unstructured data is also changing. No longer is data sent merely as email attachments. Today, data is shared through social media programs, cloud apps and communication platforms, such as Slack. In many organizations, staff are sharing sensitive data, such as consumer information, intellectual property, prospect lists, financial data and the like. Security teams need to be alerted when sensitive information is shared.

These trends should give pause to anyone concerned about securing their enterprise information.

“ One of the most important steps for any organization that wants to start proactively securing their unstructured data is to determine where that data resides and then find viable ways to protect that data. ”

According to our 2018 Data Exposure Report, 73 percent of security and IT leaders believe there is data in their company that only exists on endpoints and 80 percent of CISOs agree that they can’t protect what they can’t see. Seventy-four percent believe IT and security teams should have full visibility over corporate data.

Unfortunately, without a dedicated and continuous focus on securing unstructured data, such visibility won’t ever exist. Only chaos. 

Yes, most organizations take reasonable steps to protect their applications and databases from costly data breaches. They invest in endpoint technologies that protect their users’ endpoints from malware. They focus on database security, application security and related efforts. And they try to control access to their local enterprise network. But the challenging reality remains: even if an organization executed perfectly on such a security architecture, it would still leave itself open to a vast amount of data theft and exploitation. The reason is that organizations are ignoring the roughly 80 percent of their data that exists unstructured. 

Legacy security methods haven’t kept pace

It’s critical enterprises get the security of their unstructured data right. Securing unstructured data is different than securing data stored within applications and databases. 

One of the most important, and likely first, steps for any organization that wants to start proactively securing their unstructured data is to determine where that data resides and then find viable ways to protect that data. Other capabilities they’ll need in place include monitoring who has access to that data, indexing file content across storage devices, cloud storage and cloud services, and monitoring that data for potential data loss, misuse and theft.

Having these capabilities in place will not only help organizations to better secure that data and identify careless handling of data or even malicious insiders, but also improve the ability to conduct in-depth investigations and identify threats, preserve data for regulatory compliance demands and litigation situations, and rapidly recover lost or ransomed files.

The fact is that unstructured data is 80 percent of enterprise data today, and the places it’s being stored are expanding. It’s imperative you give it the appropriate level of focus. While you can’t put the unstructured data back in the centralized data center again, you can bring a structured approach to data security that will rein in the chaos and adequately protect your enterprise information.


Data Loss Protection: Redefining DLP

Data is one of the most valuable currencies in existence today. Companies can thrive or die based on data, and attackers—from run-of-the-mill hackers, to cybercrime syndicates, to nation states—aggressively target data. It’s no wonder that an entire industry of tools and practices exists for the sole purpose of securing and protecting data. However, data loss and data breaches are still a constant concern.

Perhaps the model of data loss prevention—or DLP—itself is flawed? I recently had an opportunity to speak with Vijay Ramanathan, senior vice president of product management at Code42, about this issue and about the unique perspective Code42 has on solving the DLP problem.

“Fundamentally—at its core—even the notion of what DLP stands for is different for us,” opened Vijay. “You know DLP as ‘data loss prevention’. We approach it as ‘data loss protection’.”

“ Rather than focusing all of the attention on prevention, it’s important to acknowledge that there’s a high probability that incidents will still occur, and have the tools in place to detect when that happens. ”

That is clever and makes for good marketing, but changing a word is just semantics. I asked Vijay to explain what that means for customers, and why he—and Code42—believe that is a superior or more effective way to tackle this problem.

He explained, “We want to look at data and data loss more holistically rather than just putting prevention strategies in place.” He went on to compare the approach to the way we treat other things in life—like our homes. He pointed out that people have locks on doors to prevent unauthorized access, but that many also augment them with alarm systems, surveillance cameras and home insurance to create a well-rounded home security strategy. Data security should be no different. 

Traditional DLP is fundamentally flawed

Vijay described why the traditional approach to DLP is broken.

The standard model of DLP requires organizations to define which data is sensitive or confidential, and which data is trivial or meaningless. There has to be an initial effort to catalog and assign classifications to all existing data, and an ongoing process for users to assign classification tags to data as new data is created. 

If you only have a few people, or a relatively small amount of data, this approach may be feasible. But, for most organizations, it is challenging—bordering on impossible—to effectively implement data labeling policies, or maintain accurate asset tagging at scale.

The second issue Vijay mentioned was that DLP often creates new issues. He told me that data classification and data handling policies are designed to prevent bad things from happening, but implementing additional policies is like protecting your home by building a taller fence. It only goes so far as a means of data protection, and it forces bad behavior by users. Employees who just want to get their jobs done will often subvert or circumvent the system, or intentionally mis-classify data to avoid draconian policies.

“ Time to awareness or time to response is the most critical issue in cybersecurity today. The lag time before a company discovers a data loss incident is crucial. ”

Protection rather than prevention

So, what does Code42 do differently, and how does that translate to better data security? I asked Vijay to explain how the Code42 approach of data loss protection addresses these issues. 

“The whole approach of traditional DLP solutions seems highly problematic,” proclaimed Vijay. “Why don’t we just assume that all of the data is important? What’s important then is to make sure you understand what is happening with your data, so you can make reasonable, informed judgments about whether that access or activity makes sense or not.”

More locks and taller fences might work to some extent, but they will never be impervious. Rather than focusing all of the attention on prevention, it’s important to acknowledge that there’s a high probability that incidents will still occur, and have the tools in place to detect when that happens. 

Vijay stressed the importance of response time—and how quickly an organization can know what is happening. “Time to awareness or time to response is the most critical issue in cybersecurity today. The lag time before a company discovers a data loss incident is crucial.”

He explained that Code42 adopted a next generation DLP philosophy with no policies and no blocking. Code42 assumes that all data is important and provides customers with the visibility to know who is accessing it and where it is going, and the ability to monitor and alert without preventing users from doing legitimate work or hindering productivity in any way.

With this philosophy in mind, the company recently introduced its Code42 Next-Gen Data Loss Protection solution. It monitors and logs all activity. Within minutes of an event, Code42 can let you know that a file was edited or saved. Within 15 minutes, the file itself is captured and stored in the cloud. Customers can store every version of every file for as long as they choose to retain the data. Code42 also provides an industry-best search capability that allows all data from the previous 90 days to be quickly and easily searched at any time. 

Vijay shared that he believes the Code42 Next-Gen Data Loss Protection approach to data security —is a better and more effective way to address this problem. Taking blocking and policies out of the equation makes it easier to administer and allows users to focus on being productive and efficient. The DVR-like ability to review activity from the past and establish focus on suspicious activity day-in-day-out provides customers with the peace of mind they need that their data is safe and sound.