
It’s Time to Rethink DLP

As much as we may not like to talk about it, half of the major threats to the security of our corporate data come from the inside. That doesn’t mean that our employees are all malicious — insider threats can surface in many ways: user errors and accidents, lost or stolen devices, even hardware failures — and the list goes on. In fact, a report by International Data Corporation (IDC) showed that three of the top five most common high-value information incidents involve insiders.

Given this, it’s no surprise that for years, organizations have been using data loss prevention (DLP) solutions to try to prevent data loss incidents. The problem is that the prevention-first approach of legacy DLP solutions no longer meets the needs of today’s IP-rich, culturally progressive organizations, which thrive on mobility, collaboration and speed. The rigid “trust no one” policies of legacy DLP can block user productivity and are often riddled with exceptions and loopholes. For IT, legacy DLP solutions can be expensive to deploy and manage — and only protect selected subsets of files.

“ The prevention-first approach of legacy DLP solutions no longer meets the needs of today’s IP-rich, culturally progressive organizations, which thrive on mobility, collaboration and speed. ”

A fresh start

The prevention focus of traditional DLP forces a productivity trade-off that isn’t right for all companies — and isn’t successfully stopping data breaches. That’s why it’s time for organizations to rethink the very concept of DLP and shift their focus from prevention to protection. Next-generation data loss protection (next-gen DLP) enables security, IT and legal teams to more quickly and easily protect their organization’s data while fostering and maintaining the open and collaborative culture their employees need to get their work done.

Rather than enforcing strict prevention policies that block the day-to-day work of employees, next-gen DLP clears the way for innovation and collaboration by providing full visibility into where files live and move. This approach allows security and IT teams to monitor, detect and respond to suspicious file activity in near real-time.

Next-gen DLP benefits

This next-gen approach to data protection provides the following benefits:

Works without policies: Unlike legacy DLP solutions, next-gen DLP does not require policies — so there is no complex policy management. Because next-gen DLP automatically collects and stores every version of every file across all endpoints, there is no need to set policies around certain types of data. When data loss incidents strike, affected files are already collected, so security and IT teams can simply investigate, preserve and restore them with ease — whether the incident affected one file, multiple files or multiple devices.

Removes productivity blocks: Next-gen DLP enables employees to work without hindering productivity and collaboration. Workers are not slowed down by “prevention-first” policies that inevitably misdiagnose events and interfere with their ability to access and use data to do their work.

Lives in the cloud: As a cloud-native solution, next-gen DLP is free from expensive and challenging hardware management, as well as the complex and costly modular architectures that are common with legacy DLP.

Deploys in days: Next-gen DLP solutions can be rapidly implemented, since the extensive time and effort required to create and refine legacy DLP policies is not needed. Since it works without policy requirements, next-gen DLP is also much easier to manage once deployed than legacy DLP. This is especially important for smaller organizations that can’t wait months or even years for a solution to be fully implemented.

Provides access to every file: While next-gen DLP doesn’t require blanket policies, security teams can still use it to observe and verify employee data use. For example, next-gen DLP can alert administrators when an unusually large number of files are transferred to removable media or cloud services. If the files have left the organization, next-gen DLP can see exactly what was taken and restore those files for rapid investigation and response.

By focusing on all files in an organization, next-gen DLP offers many additional benefits:

  • Visibility into file activity across endpoints and cloud services to speed security investigations. This differs from legacy DLP, which only provides a view of a defined subset of data.
  • Fast retrieval of file contents and historical file versions to perform detailed analysis or recovery from data incidents. Legacy DLP solutions don’t collect the contents of files and thus can’t make them available for analysis or recovery.
  • Long-term file retention to help satisfy legal and compliance requirements as well as provide a complete data history for as long a time period as an organization requires. Again, legacy solutions don’t retain file contents and so aren’t able to provide this history.

A new paradigm for DLP

Next-gen DLP is a huge departure from legacy DLP solutions, but it’s a logical and necessary evolution of the category given the changing needs and work preferences of today’s IP-rich and culturally progressive organizations — small, mid-size and large.

Armed with a more discerning tool, organizations no longer have to lock down or block data access with restrictive policies. With full visibility into where every file lives and moves, security teams can collect, monitor, investigate, preserve and recover valuable company data in the event of a data loss incident.

Companies today are looking for better ways to protect their high-value data — while freeing knowledge workers to create the ideas that drive the business. By choosing to implement next-gen DLP, organizations will be able to keep their vital data protected without hindering productivity and innovation.


Tips From the Trenches: Threat-Hunting Weapons

When it comes to cybersecurity, too many enterprises remain on a reactive footing. This ends up being a drag on their efforts because, rather than getting ahead of the threats that target their systems, they spend too much of their time reacting to security alerts and incidents within their environments.

While being able to react to attacks quickly is important for any security team, it’s also important to get out in front of potential risks to identify threats lurking within your systems before they become active.

In this post, we’ll explain how threat hunting within one’s environment can help to break that reactive cycle and improve the effectiveness of any security program.

“ You don’t need a large security organization or any special security tools to start to proactively threat hunt; any security team can start threat hunting, and often using the tools they already have. ”

Threat hunting defined

Before going forward, let’s first take a step back and define what we mean by threat hunting. Essentially, threat hunting is the proactive search for evidence of undetected malicious activity or compromise. These threats can include anything from remote-access tools beaconing to an attacker’s command and control server to malicious actions of an employee or other trusted insider.

Threat hunting is essential for effective security for many reasons. First, defensive security technologies such as intrusion detection/prevention systems and anti-malware software will never successfully identify and block all malware or attacks. Some things are just going to get through. Second, by finding malware and threats that made it past your defenses, you’ll be able to more effectively secure your systems and make your environment much harder for attackers to exploit. Finally, getting adept at finding threats in your environment will improve your organization’s overall ability to respond to threats and, as a result, over time dramatically improve your security posture.

Your arsenal

Because threat hunting entails looking for things that have yet to trigger alerts — if they would ever trigger alerts at all — it is important to look deeper for evidence of compromise. Fortunately, you don’t need a large security organization or any special security tools to start to proactively threat hunt; any security team can start threat hunting, and often using the tools they already have.

For instance, many of the data sources used in threat hunting will be found in firewall, proxy and endpoint logs. While these sources of data probably aren’t alerting on anything malicious, they still hold a considerable amount of security data that can point to indicators of a breach that slipped under the radar.

Other readily available tools are helpful for threat analysis, such as Bro (https://www.bro.org/), RITA (https://github.com/activecm/rita), or OSQuery (https://osquery.io/). These tools help provide additional visibility into network and endpoint data that can reveal signs of potential compromise. With these tools, teams can monitor internal network activity, such as virus outbreaks and lateral movement of data. Monitoring east-west network traffic in addition to what is moving through the firewall provides critical insights into the overall health of your network.
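For example, here is a minimal sketch of running an OSQuery hunt from Python. The query joins osquery’s listening_ports and processes tables to surface unexpected network listeners, and it assumes the osqueryi shell is installed and on your PATH:

```python
import json
import subprocess

# A classic osquery hunt: which processes are listening on which ports?
# Unexpected network listeners are a common indicator of compromise.
QUERY = """
SELECT DISTINCT p.name, p.path, l.port, l.address
FROM listening_ports l JOIN processes p USING (pid);
"""

def run_osquery(sql: str) -> list:
    """Run a query through the local osqueryi shell and parse its JSON output."""
    result = subprocess.run(
        ["osqueryi", "--json", sql],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    for row in run_osquery(QUERY):
        print(f"{row['name']:<24} {row['address']}:{row['port']}  {row['path']}")
```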

The investigation capabilities of Code42 Next-Gen Data Loss Protection (DLP) can be extremely helpful for threat hunting: they can determine how widely a file is distributed in the environment and provide information about a file’s lifecycle, all of which gives context about whether a file is business-related or suspicious. For example, with Code42 Next-Gen DLP, you can search by MD5 or SHA-256 hash to find all instances of a sensitive file in your organization, or determine whether known malware has been detected in your environment.
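As a rough sketch of what such a hash hunt can look like in practice, the Python below posts a search to a Forensic File Search-style REST endpoint. The URL, authentication handling and filter field names are illustrative assumptions for this sketch, not the exact Code42 API:

```python
import requests

# Placeholder values -- substitute your own tenant URL and auth token.
FFS_URL = "https://example.code42.com/forensic-search/api/v1/fileevent"
AUTH_TOKEN = "<token obtained from your authentication flow>"

def hunt_file_by_md5(md5: str) -> list:
    """Search all endpoints for file events matching an MD5 hash.
    The filter structure below is illustrative, not the exact schema."""
    query = {
        "groups": [{
            "filterClause": "AND",
            "filters": [{"operator": "IS", "term": "md5Checksum", "value": md5}],
        }],
        "pgNum": 1,
        "pgSize": 100,
    }
    resp = requests.post(
        FFS_URL,
        json=query,
        headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("fileEvents", [])

# Example: how widespread is a suspicious attachment?
events = hunt_file_by_md5("d41d8cd98f00b204e9800998ecf8427e")
print(f"{len(events)} matching file events found")
```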

New tools and new ways of thinking may seem overwhelming at first. However, threat hunting doesn’t have to be all-consuming. You can start by committing a modest amount of time to the hunt and incrementally build your threat-hunting capability over weeks and months to find malicious files and unusual activity. As a direct benefit to your security program, you will be able to eliminate noise in your environment, better tune your security tools, find and harden areas of vulnerability, and enhance your security posture at your own pace.

Now, get hunting.


How Next-Gen DLP Is Helping Code42 Customers Today

Since we announced the Code42 Next-Gen Data Loss Protection (Next-Gen DLP) product last month, it has struck a chord with the prospects and industry analysts we’ve spoken to. It’s exciting to see, because we believe this is an important step beyond conventional data loss prevention technology.

With the introduction of our new product, Code42 is rewriting the paradigm for legacy data loss prevention — shifting the focus from prevention to protection. The Code42 Next-Gen DLP solution offers a simpler, quicker way to secure an organization’s endpoint and cloud data from loss, leak, misuse and theft. Unlike traditional DLP, this cloud-native solution safeguards every version of every file without complex policies and without blocking user productivity or collaboration.

“ Code42 is rewriting the paradigm for legacy data loss prevention — shifting the focus from prevention to protection. ”

The positive reception is no surprise to us. Code42 customers have been using the capabilities that make up Code42 Next-Gen DLP to secure their businesses from data threats for a long time. Here are some of their stories:

Full control of IP — even when employees leave

One of our customers is a global advertising and communications firm. Like many professional services businesses, they need to ensure that proprietary information stays inside the organization to maintain its competitive position and client trust. Backed by Code42’s solution, their organization has visibility into where files are moving and who has access to what information, protecting the company from stolen data — especially when employees leave the company. “It can be a huge hit to your reputation if you lose data specific to a client or project,” the infrastructure project manager said. “Code42 gives us an unalterable snapshot of every single record — which means we’re always protected.”

GDPR compliance with mobile workers

Marel, a food processing machinery company based in Iceland, is another customer that has put Code42 to work inside their organization. Like many global companies, Marel must comply with the EU’s new General Data Protection Regulation (GDPR), which strengthens the protection of individuals’ personal data.

With Europe as one of its top markets, Marel needed a way to proactively protect and secure data stored on employee devices. “Our sales and service force use laptops heavily so they can work more efficiently while they’re traveling,” said Rob Janssen, manager of global infrastructure and QRC at Marel. “Likewise, managers also work from different locations. In the past their documents, which may contain sensitive data, were not always immediately synced back to the central storage solutions.”

Code42 continuously backs up every version of every file in real time, enabling Marel to comply with key GDPR data protection, recovery and notification requirements. Marel can easily meet GDPR rules requiring companies to report the extent of any personal data breach within 72 hours. The company can recover all files, including data that’s been deleted or maliciously encrypted. The ability to search through archives allows Janssen to determine what files were on a device at a given date and time, what users had access to those files, and what content, including personal information, was housed within those files.

“In the event of a data breach, Code42 helps us assess our exposure by giving us full visibility into every file on every laptop,” Janssen said. “We believe this is critical to complying with the GDPR. Of course, there is a strict process to be followed in these cases.”

A legal hold process with teeth

Another company we count as our customer is MacDonald-Miller. Located in the Pacific Northwest, they are a full-service, design-build mechanical contractor. MacDonald-Miller’s unique value proposition includes designing and blueprinting buildings, and then sending in a full team of plumbers, electricians and sheet metal workers to work on the build. With all that valuable design IP to protect, having an effective legal hold process is critical.

“Prior to Code42, our legal hold process was very vague,” said MacDonald-Miller Network Administrator Chad Tracy. “HR or IT had to find the user’s computer and manually try to search through documents, pictures and Excel files to see what may or may not have been on the user’s computer at the time of termination.”

Now, with Code42, MacDonald-Miller can use a portal to set up a legal hold for users and then monitor whether they’re copying documents to their personal drives.

“We had a pretty high-profile gentleman leaving the company,” said Eddie Anderson, a help desk support agent at MacDonald-Miller. “Through that portal, we were able to monitor his file history and found out 90 gigs of sales opportunities and other critical data had left the network onto his external drive. Before Code42, there was no way of ever knowing that was happening.”

50,000 customers and counting

Code42 Next-Gen DLP is built from a combination of products that are part of the company’s award-winning data security portfolio, including Code42 Forensic File Search, File Exfiltration Detection, Legal Hold and Backup + Restore. Today, more than 50,000 customers are using capabilities that are part of the Code42 Next-Gen DLP solution.

If you’re a Code42 customer with a tale of success that you’d like to share, let us know. We look forward to including you in a future post!



Tips From the Trenches: Enhancing Phishing Response Investigations

In an earlier blog post, I explained how the Code42 security team is using security orchestration, automation and response (SOAR) tools to make our team more efficient. Today, I’d like to dive a little deeper and give you an example of how we’re combining a SOAR tool with the Code42 Forensic File Search API — part of the Code42 Next-Gen Data Loss Protection (DLP) product — to streamline phishing response investigations.

A typical phishing response playbook — with a boost

Below is a screenshot of a relatively simple phishing response playbook that we created using Phantom (a SOAR tool) and the Code42 Forensic File Search API:

We based this playbook on a phishing template built into the Phantom solution. It includes many of the actions that would normally be applied as a response to a suspicious email — actions that investigate and geolocate IP addresses, and conduct reputation searches for IPs and domains. We added a couple of helper actions (“deproofpoint url” and “domain reputation”) to normalize URLs and assist with case management.

You may have noticed one unusual action. We added “hunt file” via the Code42 Forensic File Search API. If a suspicious email has an attachment, this action will search our entire environment by file hash for other copies of that attachment.

“ Combining the speed of Code42 Next-Gen DLP with the automation of SOAR tools can cut response times significantly. ”

What Code42 Next-Gen DLP can tell us

Applying Code42 Next-Gen DLP to our playbook shortens investigation time. The “hunt file” action allows us to quickly see if there are multiple copies of a malicious file in our environment. If that proves to be true, it is quick evidence that there may be a widespread email campaign against our users. On the other hand, the search may show that the file has a long internal history across file locations and endpoints. This history would suggest that the file exists as part of normal operating procedure and that we may be dealing with a false alarm. Either way, the Code42 Next-Gen DLP API and its investigation capability together give us additional file context, so our security team can make smarter, more informed and more confident decisions about what to do next.
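To make that branching concrete, here is a hedged Python sketch of the triage logic. It assumes the hash search returns a list of file events carrying an ISO 8601 “eventTimestamp” field — an illustrative schema for this sketch, not the exact API response:

```python
from datetime import datetime, timedelta, timezone

def triage_attachment(events: list) -> str:
    """Classify a phishing attachment from its "hunt file" search results.
    `events` is the list of file events returned by a hash search; the
    'eventTimestamp' field name is an assumed, illustrative schema."""
    if not events:
        return "isolated: only seen in this email; detonate in a sandbox"

    earliest = min(
        datetime.fromisoformat(e["eventTimestamp"].replace("Z", "+00:00"))
        for e in events
    )
    age = datetime.now(timezone.utc) - earliest

    # Many copies that all appeared very recently suggest an active campaign.
    if len(events) > 10 and age < timedelta(days=2):
        return "widespread and new: likely phishing campaign; escalate"

    # A long internal history suggests normal business use.
    if age > timedelta(days=90):
        return "long internal history: probably benign; likely false alarm"

    return "ambiguous: route to an analyst for manual review"
```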

Applying Code42 Next-Gen DLP to other threat investigations

This type of “hunt file” action does not need to be limited to investigating suspected phishing emails. In fact, it could be applied to any security event that involves a file — such as an anti-virus alert, an EDR alert or even IDS/IPS alerts that trigger on file events. Using Code42 Next-Gen DLP, security staff can determine in seconds where else that file exists in the environment and if any further action is necessary.

Combining the speed of Code42 Next-Gen DLP with the automation of SOAR tools can cut response times significantly. That’s something any security team can appreciate.

As always, happy threat hunting!


Security Must Enable People, Not Restrain Them

Do you ever think about why we secure things? Sure, we secure our software and data so that attackers can’t steal what’s valuable to us — but we also secure our environments so that we have the safety to do what we need to do in our lives without interference. For example, law enforcement tries to keep the streets safe so that civilians are free to travel and conduct their daily business relatively free of worry.

Now consider how everyday police work keeps streets safe. It starts with the assumption that most drivers aren’t criminals. Officers don’t stop and interrogate every pedestrian or driver about why they are out in public. That type of policing — with so much effort spent questioning law-abiding citizens — would not only miss spotting a lot of actual criminal behavior, it would certainly damage the culture of such a society.

There’s a lot we can learn about how to approach data security from that analogy. Much of cybersecurity today focuses on trying to control the end user in the name of protecting the end user. There are painful restrictions placed on how employees can use technology, what files they are able to access and how they can access them. Fundamentally, we’ve built environments that are very restrictive for staff and other users, and sometimes outright stifling to their work and creativity.

This is why we need to think about security in terms of enablement, and not just restraint.

“ Security should be about enabling people to get their work done with a reasonable amount of protection — not forcing them to act in ways preordained by security technologies. ”

Prevention by itself doesn’t work

What does that mean in practicality? Consider legacy data loss prevention (DLP) software as an example. With traditional DLP, organizations are forced to create policies to restrict how their staff and other users can use available technology and how they can share information and collaborate. When users step slightly “out of line,” they are interrogated or blocked. This happens often and is mostly unnecessary.

This prevention bias is, unfortunately, a situation largely created by the nature of traditional DLP products. These tools ship with little more than a scripting language for administrators to craft policies — lots and lots of policies, related to data access and how data is permitted to flow through the environment. And if organizations don’t have a crystal-clear understanding of how everyone in the organization uses applications and data (which they very rarely do), big problems arise. People are prevented from doing what they need to do to succeed at their jobs. Security should be about enabling people to get their work done with a reasonable amount of protection — not forcing them to act in ways preordained by security technologies.

This is especially not acceptable today, with so much data being stored, accessed and shared in cloud environments. Cloud services pose serious challenges for traditional DLP solutions because of their focus on prevention. Since so many legacy DLP products are not cloud native, they lose visibility into what is happening on cloud systems. Too often, the result is that people are blocked from accessing the cloud services they need. Once again, users are treated like potential criminals — and culture and productivity both suffer.

This is also a poor approach to security, in general. As security professionals who have been around a while know, end-user behavior should never be overridden by technology, because users will find ways to work around overbearing policies. It’s just the law of governing dynamics and it will rear its head when the needs of security technologies are placed above the needs of users.

Where’s the value for users?

There is one last area I’d like to go over where traditional DLP falls short when it comes to providing user enablement, and it’s an important one. Traditional DLP doesn’t provide any tangible value back to the staff and other users working in an environment it protects. All they typically get are warning boxes and delays in getting their work done.

In sum, traditional DLP — and security technology in general — doesn’t just prevent bad things from happening, it also too often prevents users from doing what they need to do. They feel restrained like criminals for simply trying to do their jobs. In actuality, a very small percentage of users will ever turn malicious. So why should we make everyone else feel like they are doing something wrong? We shouldn’t.

Code42 Next-Gen DLP

At Code42 we believe it’s essential to assume the best intentions of staff and other users. That’s why Code42 Next-Gen Data Loss Protection focuses on identifying malicious activity, rather than assuming malicious intent from everyone. It’s why the product is built cloud-native: organizations aren’t blind when it comes to protecting popular cloud services, and users aren’t blocked from working the way they want to work. Nor does it require policies that must be created and forever managed, pigeonholing users into working in certain ways.

Finally, we believe in providing value to the end user. It’s why we provide backup and restore capability in Code42 Next-Gen DLP. This fundamentally gives users the freedom to make mistakes and recover from them, along with the knowledge that their data is protected and safe.

Because it doesn’t block or interrogate users every step of the way, we believe Code42 Next-Gen DLP helps users to be more secure and productive, and enhances organizational culture. It also gives the security team the opportunity to be an enabler for their end users, not an obstacle.

In this sense, Code42 Next-Gen DLP is a lot like good police work. It gives its users the freedom they need to move about the world without every motion being questioned for potential malicious intent. This is a very powerful shift in the workplace paradigm; users should be empowered to behave and collaborate as they want without fear or worry regarding the security technology in place.

Security Panel: Filling Gaps in Your Security Stack (Video)

Keeping up with the constantly evolving cyber threatscape and plugging new security gaps is never-ending. We recently gathered several members of the Code42 security team for a panel discussion about how they manage threats and mitigate risks in today’s security environment. A recording of their conversation is now available on demand.

Below are a few highlights and sneak peeks from their conversation. You can check out the full panel discussion here.

Vendor risk management in the SaaS era

Maintaining visibility across a SaaS environment can be very challenging. One action our security team has taken to help address the situation is to put more focus on vendor risk management. Watch the short clip below to learn how they have defined a set of security controls for our vendor partners — and how the team holds vendors accountable to these standards.

Managing insider threats

Not all insider threats are malicious. People sometimes make mistakes and unexpected disruptions can put data at risk. In the next video clip, our security team talks about strategies to consider when implementing a comprehensive insider threat security plan that protects against both malicious and unintended threats.

Mitigating the risk of shadow IT with an application inventory

Gartner now predicts that by 2020, one-third of successful enterprise attacks will aim at unauthorized shadow IT applications. At Code42, we know that shadow IT is a big security risk. We also know it’s a risk that we’re not going to fully eliminate. Instead, our security team is focused on increasing our visibility into what applications — authorized or not — exist in our environment. In this next video clip, our panel shares how they search for instances of potentially compromised applications, so they can take quick and effective action.

How Code42 Forensic File Search fits in the security tech stack

With the wide range of security tools that already exist, where does Code42 Forensic File Search fit in a security stack? In the next video clip, our security team talks about how they use Code42 Forensic File Search in combination with other security tools and on its own to address unique use cases. The panel also talks about innovative ways that organizations can use Code42 Forensic File Search to fill existing security gaps, instantly expanding visibility and getting answers to questions like “Where is this file in our environment?”

This is just a sample of the insider knowledge shared during our panel discussion. Don’t miss your chance to hear the full conversation, on demand, right here: Code42 Security Panel: Choosing Tools to Fill Gaps in Your Security Stack

Code42 Next-Gen Data Loss Protection: What DLP Was Meant to Be

Malware and other external cyber threats get most of the headlines today. It’s not surprising, given the damage done to companies, industries and even countries by outside-in attacks on data. Despite that, insider threats — data being lost or stolen because of actions inside the company — are just as big a risk.

According to the 2018 Insider Threat Report by Cybersecurity Insiders, 90 percent of cybersecurity professionals feel vulnerable to insider threats. McKinsey’s Insider threat: The human element of cyberrisk reports that 50 percent of breaches between 2012 and 2017 involved insiders.

“ By rethinking traditional DLP, you can know exactly where all your data is, how it is moving throughout your organization and when and how it leaves your organization — without complex policy management, lengthy deployments or blocks to your users’ productivity. ”

“The rise of insider threats is a significant threat to every business and one that is often overlooked,” said Jadee Hanson, Code42’s CISO. “While we all would like to think that employees’ intentions are good, we prepare for malicious actions taken by those from within our organizations. As external protection increases, we all should be concerned as to the influence external actors may have on those working for us and with us every day.”

Insider threats are a big deal, and traditional data loss prevention (DLP) solutions were developed to protect companies and their data from these internal events.

DLP hasn’t delivered

While traditional DLP solutions sound good in concept, most companies use only a fraction of their capabilities. Security teams describe using these solutions as “painful.” Legacy DLP deployments take months or years, because proper setup requires an extensive data classification process, and refining DLP policies to fit unique users is complex and iterative. And after all that time, traditional DLP still blocks employees from getting their work done, with rigid data restrictions that interfere with user productivity and collaboration. Legacy solutions also require on-site servers — counter to the growing business priority of moving solutions to the cloud.

Most importantly, legacy DLP solutions are focused on prevention. Business and security leaders now recognize that prevention alone is no longer enough. Mistakes happen, and data threats sometimes succeed. Being able to recover quickly from data loss incidents is just as important as trying to prevent them.

Rethink DLP

At Code42, we protect over 50,000 companies from internal threats to their data. This focus on protection has enabled us to see things differently, and develop an alternative to data loss prevention: data loss protection. We are excited to announce the new Code42 Next-Gen Data Loss Protection (Code42 Next-Gen DLP) solution that rethinks legacy DLP and protects data from loss without slowing down the business.

Code42 Next-Gen DLP is cloud-native and protects your cloud data as well as all of your endpoint data. It deploys in days instead of months, and provides a single, centralized view with five key capabilities:

  • Collection: Automatically collects and stores every version of every file across all endpoints, and indexes all file activity across endpoints and cloud. 
  • Monitoring: Helps identify file exfiltration, providing visibility into files being moved by users to external hard drives, or shared via cloud services, including Microsoft OneDrive and Google Drive.
  • Investigation: Helps quickly triage and prioritize data threats by searching file activity across all endpoints and cloud services in seconds, even when endpoints are offline; and rapidly retrieves actual files — one file, multiple files or all files on a device — to determine the sensitivity of data at risk.
  • Preservation: Allows configuration to retain files for any number of employees, for as long as the files are needed to satisfy data retention requirements related to compliance or litigation.
  • Recovery: Enables rapid retrieval of one file, multiple files or all files on a device even when the device is offline, or in the event files are deleted, corrupted or ransomed.

By rethinking traditional DLP, you can know exactly where all your data is, how it is moving throughout your organization and when and how it leaves your organization — without complex policy management, lengthy deployments or blocks to your users’ productivity. DLP can finally deliver on what it was originally created to do.


Tips From the Trenches: Architecting IAM for AWS with Okta

In the last year, Code42 made the decision to more fully embrace a cloud-based strategy for our operations. We found that working with a cloud services provider opens up a world of possibilities that we could leverage to grow and enhance our product offerings. This is the story of how we implemented identity and access management (IAM) in our Amazon Web Services (AWS) environment.

IAM guiding principles

Once the decision was made to move forward with AWS, our developers were hungry to start testing the newly available services. Before they could start, we needed two things: an AWS account and a mechanism for them to log in. Standing up an account was easy. Implementing an IAM solution that met all of our security needs was more challenging. We were given the directive to architect and implement a solution that met the requirements of our developers, operations team and security team.

We started by agreeing on three guiding principles as we thought through our options:

1.) Production cloud access/administration credentials need to be separate from day-to-day user credentials. This was a requirement from our security team that aligns with existing production access patterns. Leveraging a separate user account (including two-factor authentication) for production access decreases the likelihood of the account being phished or hijacked. This secondary user account wouldn’t be used to access websites, email or for any other day-to-day activity. This wouldn’t make credential harvesting impossible, but it would reduce the likelihood of an attacker easily gaining production access by targeting a user. The attacker would need to adopt advanced compromise and recon methods, which would provide our security analysts extra time to detect the attack.

2.) There will be no local AWS users besides the enforced root user, who will have two-factor authentication; all users will come through Okta. Local AWS users have some significant limitations and become unwieldy as a company grows beyond a few small accounts. We were expecting to have dozens, if not hundreds, of unique accounts. This could lead to our developers having a unique user in each of the AWS environments. These user accounts would each have their own password and two-factor authentication. In addition to a poor end-user experience, identity lifecycle management would become a daunting and manual task. Imagine logging into more than 100 AWS environments to check if a departing team member has an account. Even if we automated the process, it would still be a major headache.

Our other option was to provide developers with one local AWS user with permissions to assume role in the different accounts. This would be difficult to manage in its own way as we tried to map which users could connect to which accounts and with which permissions. Instead of being a lifecycle challenge for the IAM team, it would become a permissioning and access challenge.

Fortunately for us, Code42 has fully embraced Okta as our cloud identity broker. Employees are comfortable using the Okta portal and all users are required to enroll in two-factor authentication. We leverage Active Directory (AD) as a source of truth for Okta, which helps simplify user and permission management. By connecting Okta to each of our AWS accounts, users can leverage the same AD credentials across all AWS accounts — and we don’t need to make any changes to the existing IAM lifecycle process. Permissioning is still a challenge, but it can be managed centrally with our existing IAM systems. I will describe in greater detail exactly how we achieved this later in the post.

3.) Developers will have the flexibility to create their own service roles, but will be required to apply a “deny” policy, which limits access to key resources (CloudTrail, IAM, security, etc.). As we were creating these principles, it became clear that the IAM team would not have the bandwidth to be the gatekeepers of all roles and policies (how access is granted in AWS). Developers would need to be empowered to create their own service roles, while we maintained control over the user access roles via Okta. Letting go of this oversight was very difficult. If not properly managed, it could have opened us up to the risk of malicious or accidental modification of key security services, or of over-permissioned roles.

Our initial solution to this problem was to create a “deny” policy that would prevent services and users from interacting with some key security services. For example, there should never be a need within an application or microservice to create a new IAM user or a new SAML provider. We notified all users that this deny policy must be attached to all roles created and we used an external system to report any roles that didn’t have this deny policy attached.
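To illustrate the idea, here is a boto3 sketch of creating such a deny policy. This is an example, not our actual policy — the specific actions listed are illustrative of the kinds of security-service calls we wanted off-limits:

```python
import json
import boto3

# Illustrative guardrail: explicitly deny actions against key security
# services, no matter what other permissions a role grants. An explicit
# Deny always wins over any Allow in IAM policy evaluation.
DENY_SECURITY_SERVICES = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenySecurityServiceTampering",
        "Effect": "Deny",
        "Action": [
            "iam:CreateUser",          # no new local users
            "iam:CreateSAMLProvider",  # no new identity providers
            "cloudtrail:StopLogging",  # no disabling the audit trail
            "cloudtrail:DeleteTrail",
        ],
        "Resource": "*",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="deny-security-services",
    PolicyDocument=json.dumps(DENY_SECURITY_SERVICES),
)
```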

Recently, AWS released a new IAM feature called permission boundaries, whose intent is similar to that of our deny policy. By using permission boundaries, we can control the maximum permissions users can grant to the IAM roles they create. We are planning to roll this out in lieu of the deny policy in the very near future.
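In boto3 terms, adopting a permission boundary might look like the sketch below; the role name and policy ARN are placeholders. Whatever policies the role carries, its effective permissions can never exceed the boundary:

```python
import boto3

iam = boto3.client("iam")

# Attach a permissions boundary to a developer-created role. The boundary
# caps the role's effective permissions regardless of its attached policies.
iam.put_role_permissions_boundary(
    RoleName="example-microservice-role",                                 # placeholder
    PermissionsBoundary="arn:aws:iam::123456789012:policy/dev-boundary",  # placeholder
)
```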

[Screenshot: example of a role found without the deny policy attached]

Implementing Okta and AWS

When thinking through connecting Okta and AWS, we were presented with two very different architectural designs: hub and spoke and direct connect. The hub and spoke design leverages an AWS landing account that is connected to Okta. Once logged in to this account, users can switch roles into other AWS accounts that they are authorized to use. The direct connect design creates a new Okta application icon for each AWS account. Users access their accounts by visiting their Okta homepage and selecting the account they want to use.

Power users tend to prefer the hub and spoke model, as this allows them to quickly jump from account to account without re-logging in or grabbing a new API token. The more casual users prefer to have all accounts presented on one page. They aren’t swapping among accounts, and it isn’t fair to ask them to memorize account numbers (or even exact short names) so they can execute an assume role command. In addition to user experience, we considered how easy it was to automate management once a new account has been created. The two approaches each have merit, so we decided to implement both.

When a new account is created, it is bootstrapped to leverage the hub and spoke landing account. Automation can immediately start working with the account, and certain power users get the access they need without any IAM intervention. The IAM team can revisit the account when convenient and stand up the direct connection to Okta. New Okta features, currently in beta, will improve this direct connect process.

One final thing I would like to touch on is how we leverage the Okta launcher to get API tokens in AWS. One of the benefits of having local users in AWS is that each user is given their own API key. While this is a benefit to end users, these keys are very rarely rotated and could present a significant security risk (such as an accidental public GitHub upload). To address this, Okta has created a Java tool that generates a temporary AWS API key. The repo can be found here. Like many other companies, we have created wrappers for this script to make things as easy as possible for our end users. After cloning the repo, a user can type the command “okta -e $ENV_NAME” and the script will reach out to Okta and generate an API key for that specific AWS account. Users do need to know the exact environment name for this script to work, but most power users who need API access will have this information.
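Under the hood, tools like this exchange an Okta SAML assertion for short-lived STS credentials. Here is a minimal boto3 sketch of that exchange, assuming you have already obtained a base64-encoded SAML assertion from Okta (automating that step is what the Okta tool and our wrappers handle):

```python
import boto3

def temporary_aws_keys(saml_assertion_b64: str, role_arn: str, idp_arn: str) -> dict:
    """Exchange a SAML assertion for short-lived AWS credentials.
    Fetching the assertion from Okta is assumed to happen out of band."""
    sts = boto3.client("sts")
    resp = sts.assume_role_with_saml(
        RoleArn=role_arn,
        PrincipalArn=idp_arn,             # ARN of the Okta SAML provider in IAM
        SAMLAssertion=saml_assertion_b64,
        DurationSeconds=3600,             # credentials expire after an hour
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken and Expiration.
    return resp["Credentials"]
```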

No matter where your company is on the path to leveraging a cloud service provider, IAM is a foundational component that needs to be in place for a successful and secure journey. If possible, try to leverage your existing technologies to help improve user experience and adoption. I hope the principles we shared here help you think through your own requirements.


Tips From the Trenches: Searching Files in the Cloud

In a few of my previous blogs, I shared some examples of ways the Code42 security team uses Code42 Forensic File Search to find interesting files — macro-enabled Microsoft Office files, known malicious MD5 hashes and so on. Now that the search capabilities of our newest product have been extended beyond endpoints to include cloud services, such as Google Drive and Microsoft OneDrive, I’d like to look at how we’re using this broadened visibility in our investigations.

“ Because we can now use Code42 Forensic File Search to search for files and file activity across both endpoints and Google Drive, we can be more certain of the locations of sensitive files when we are doing file movement investigations. ”

Finding files – and tracking file movement – in the cloud

Code42 uses Google Drive as a cloud collaboration platform. Because we can now use Code42 Forensic File Search to search for files and file activity across both endpoints and Google Drive, we can be more certain of the locations of sensitive files when we are doing file movement investigations. We combine Code42 Forensic File Search with the Code42 File Exfiltration Detection solution to execute an advanced search — using a given MD5 hash — to find files that have been moved to a USB drive. This allows us to quickly build a complete picture of where a file exists in our environment — and how it may have moved from someone’s laptop to the cloud and back.

What files are shared externally?

Using the latest version of Code42 Forensic File Search, we can also search files based on their sharing status. For example, in a matter of a few seconds, we can search for all Google Drive documents that are shared with non-Code42 users. This shows us all documents that have been intentionally or inadvertently shared outside of the company. A deeper look at this list helps us identify any information that has been shared inappropriately. As with all searches within Code42 Forensic File Search, these investigations take only a few seconds to complete.

Here’s a hypothetical example: Let’s say the organization was pursuing an M&A opportunity and we wanted to make sure that confidential evaluation documents weren’t being shared improperly. We could use Code42 Forensic File Search to pull up a list of all documents shared externally. Should that list contain one of the confidential M&A evaluation documents, we could look more closely to determine if any inappropriate sharing occurred.
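In API terms, that kind of sweep might look like the sketch below. The endpoint URL and filter term names are illustrative assumptions for this sketch, not the exact search schema:

```python
import requests

# Illustrative filter: cloud files whose sharing status indicates exposure
# outside the company. Term names and values here are assumptions.
EXTERNAL_SHARE_QUERY = {
    "groups": [{
        "filterClause": "AND",
        "filters": [
            {"operator": "IS", "term": "sharedWith", "value": "outside-company"},
        ],
    }],
    "pgNum": 1,
    "pgSize": 100,
}

resp = requests.post(
    "https://example.code42.com/forensic-search/api/v1/fileevent",  # placeholder
    json=EXTERNAL_SHARE_QUERY,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()
for event in resp.json().get("fileEvents", []):
    print(event.get("fileName"), event.get("sharedWith"))
```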

Continually finding new use cases

Code42’s ffs-tools repository on GitHub now includes several new searches that take advantage of our new cloud capabilities. You can find them all here.

Like most organizations, we use many cloud services to perform our day-to-day work. That’s why in the near future, we plan to expand the search capabilities of Code42 Forensic File Search across even more cloud services — giving you even greater visibility into the ideas and data your organization creates, no matter where they live and move.

Happy threat hunting!

7 Steps to Real-Time File Exfiltration Detection (Video)

This year’s Verizon Data Breach Investigations Report (DBIR) came out a few weeks ago, and — surprise, surprise — insider threat remains one of the biggest problems for enterprise data security. Looking at the DBIR, there are all the usual data exfiltration suspects: Most are so-called “inadvertent insiders” and a few are malicious insiders or malicious outsiders using stolen credentials. All of these attackers are acting with complete authorization, so their activities tend to fly under the radar — not tripping any of the traditional data security alarms — until it’s far too late. In fact, Verizon found that the vast majority (68 percent) of insider data loss events take a month or more for the organization to discover.

See file exfiltration in real time

With Code42 deployed in your environment, you have a powerful tool for recognizing suspicious file exfiltration activity by authorized users. Code42’s File Exfiltration Detection solution enables you to set a threshold that alerts you when users move more than a typical number of files to an external location — whether copying them to a removable storage device or uploading them to a cloud service.

Code42’s File Exfiltration Detection solution in action

Here’s how File Exfiltration Detection could help you detect and respond to a disgruntled employee’s malicious attempt to steal your IP:

  1. Set the threshold. From the Code42 web console, set the File Exfiltration Detection threshold at 10 files or 50 MB.
  2. Alert! An email notification tells you that a user recently moved more than 200 MB of data to a third-party cloud service account, such as Microsoft OneDrive or Google Drive.
  3. Confirm. Clicking the email link brings you back to the Code42 web console, where you can see the details of the user’s suspicious activity. For example, you can view a historical perspective of the user’s cloud service activity to see that, yes, this is a highly unusual event.
  4. Investigate. Dig deeper by exporting a CSV file that shows detailed information on all the files included in this mass exfiltration. The CSV includes each file’s name and MD5 hash as well as details on where the files were moved and when.
  5. Unzip the zip. Let’s say the malicious insider attempted to hide photos and videos of proprietary manufacturing processes in a large, innocent-sounding zip file: “cat videos.zip.” You can use the Code42 Backup + Restore solution to download that zip file and reveal its true contents.
  6. Track the source. What if the malicious actor tried to hide his tracks by renaming and/or modifying the original files? Because File Exfiltration Detection provides the MD5 hash of all the exfiltrated files, you can use Code42 Forensic File Search to search your entire environment for those hashes. This lets you track the modified or renamed file back to its source (see the sketch after this list).
  7. Take action — faster. Between the real-time alert from File Exfiltration Detection, the complete data visibility from Code42 Backup + Restore and the instant file search capabilities of Code42 Forensic File Search, this entire investigation took less than an hour. You know the event happened. You know who did it. And you have a huge head start on stopping the malicious actor before more sensitive data gets out of your control.
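As promised in step 6, here is a minimal Python illustration of why renaming can’t hide a file: an MD5 hash depends only on the file’s contents, not its name. The file names below are hypothetical:

```python
import hashlib

def md5_of(path: str) -> str:
    """Compute a file's MD5 in chunks so large files don't exhaust memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# A renamed copy hashes identically to the original, so a hash search ties
# it back to its source. Any change to the contents produces a new hash.
print(md5_of("cat videos.zip") == md5_of("totally_not_IP.zip"))  # True if contents match
```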