42 Seconds with a Code42 Customer: Lehigh University

Code42 provides your business with a variety of data security benefits, including increased productivity, risk mitigation, streamlined user workflows and more, all in a single product that’s proven to save you money. While Code42 has a few primary use cases, such as backup and recovery and device migration, we’ve learned that our customers use Code42 in different ways. To explore how customers use our product, we recently partnered with the talented team at creative agency Crash+Sues to create a series of animated videos featuring the voices and likenesses of actual Code42 users.

In our latest video, Naazer Ashraf, senior computing consultant at Lehigh University, explains why the university relies on Code42 rather than sync-and-share products for data backup and restore. As one of the nation’s premier research universities, Lehigh’s faculty are known for their excellence in research. Data is extremely important (and valuable) to researchers, so imagine the reaction when one researcher deleted files from Google Drive to save space, and discovered that doing so wiped the files of 10 other researchers. Naazer tells the story in just 42 seconds. Check it out below.

Protect Your Data from Insider Threats with Code42

Code42 provides your business with a variety of benefits, including increased productivity, risk mitigation, streamlined user workflows, and more – all in a single product that’s been proven to ultimately save you money. Recently, Code42 launched Security Center, a new suite of tools to help you spot suspicious data use behaviors in your workforce – and respond to them if necessary. There’s a big reason why we added this feature – the facts show that 89 percent of corporate data loss involves the actions of an insider.

We recently partnered with the talented team at creative agency Crash+Sues to create a series of videos about the core features of Code42. This most recent video focuses on an all-too-common scenario in which an employee decides to steal valuable data from his employer. Unfortunately for him, this company has Code42’s Security Center.

Take a look today for an illustration of how Code42 and Security Center can help keep your enterprise’s data safe from insider threats.

We Are All Surfing with the Phishes

Phishing is in the news again – and for good reason. Last month, news broke of a “megabreach” dump of 773 million email and password credentials. At first, the disclosure made a sizable splash. But as researchers dug in, it turned out the dump had been circulating online for some time, as independent security journalist Brian Krebs covered on his blog, KrebsOnSecurity. Maybe the news wasn’t as big a deal as we first thought?

The news turned out to be bigger, in some ways. More large tranches of credentials were uncovered in the days that followed, bringing the total to 2.2 billion records of personal data made public. Even if the vast majority of these records are old (and by all estimates they probably are), this massive collection of information substantially increases the risk of phishing attacks targeting these accounts now that they have been pushed above ground.

“ According to the State of the Phish Report, credential-based compromises increased 70 percent since 2017 and 280 percent since 2016. ”

Phishing remains one of the most common and, unfortunately, most successful attacks targeting users – and it’s not just user endpoints that are in the bad guys’ sights. Often, phishers aim first at users as a way to get closer to something else they are seeking, perhaps information on corporate executives, business partners or anything else they deem valuable. When an employee clicks on a link or opens a maliciously crafted attachment, his or her endpoint can be compromised. That not only puts the user’s data at risk of compromise or destruction, such as through a ransomware attack, but also gives attackers a platform from which to dig deeper into other networks, accounts and cloud services.

Consider Proofpoint’s most recent annual State of the Phish Report, which found that 83 percent of global information security respondents experienced phishing attacks in 2018, up considerably from 76 percent in 2017. The good news is that about 60 percent saw an increase in employee detection after awareness training. The report also found that credential-based compromises increased 70 percent since 2017 and 280 percent since 2016.

Unfortunately, the report also found that data loss from phishing attacks tripled from 2017 to 2018. Tripled.

“ Someone is going to click something bad, and antimalware defenses will miss it. ”

This latest uncovering of credentials is a good reminder of why organizations always have to keep their defenses primed against phishing attacks. These defenses should be layered: security awareness training, antispam filtering, and endpoint and gateway antimalware, along with comprehensive data protection, backup and recovery capabilities for when they’re needed, such as after a malware infection or successful ransomware attack.

However, even with all of those controls in place, the reality is that some phishing attacks are going to succeed. Someone is going to click something bad, and antimalware defenses will miss it. The organization needs to be able to investigate successful phishing attacks. This includes understanding the location of the IP addresses involved, gaining insight into the reputation of internet domains and IP addresses, and establishing workflows to properly manage the case. These investigations can help your organization protect itself by blocking malicious mail and traffic from those addresses, notifying domain owners of bad activity and even assisting law enforcement.
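To make that first triage step concrete, here is a minimal sketch in Python. It assumes the suspicious message has been saved locally as suspect.eml (a hypothetical filename); the deliberately crude regex extraction and DNS resolution shown here would feed whatever geolocation and reputation services your workflow uses.

```python
# Minimal phishing-triage sketch. Assumptions: the suspicious message is
# saved as suspect.eml; URL extraction is a deliberately crude regex; the
# resolved IPs are printed for an analyst rather than sent to a real
# reputation or geolocation service.
import email
import re
import socket
from email import policy

URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)")

def extract_domains(eml_path):
    """Pull candidate domains out of the message body."""
    with open(eml_path, "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)
    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content() if body else ""
    return sorted(set(URL_RE.findall(text)))

def resolve(domain):
    """Resolve a domain so its IP can be geolocated and blocked upstream."""
    try:
        return socket.gethostbyname(domain)
    except socket.gaierror:
        return None  # domain no longer resolves; still worth recording

if __name__ == "__main__":
    for domain in extract_domains("suspect.eml"):
        print(domain, "->", resolve(domain))
```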

When you find a file that is suspected of being malware, you can then search across the organization for that file. Chances are that, if a malicious file arrived in a phishing attack, it targeted many people in the organization. In his post Tips From the Trenches: Enhancing Phishing Response Investigations, Nathan Hunstad details how our hunt file capability integrates with security orchestration, automation and response (SOAR) tools to rapidly identify suspicious files across the organization and swiftly reduce risk.
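The hunt-file idea reduces to a small amount of code. The sketch below is illustrative only: SEARCH_URL and API_TOKEN are placeholders for whatever file-event search service your SOAR playbook calls, not a documented Code42 endpoint.

```python
# Hash-hunt sketch. Assumptions: SEARCH_URL and API_TOKEN are placeholders
# for your deployment's file-event search service; the query parameter
# name is illustrative, not a documented API.
import hashlib
import json
import urllib.request

SEARCH_URL = "https://search.example.com/api/v1/file-events"  # hypothetical
API_TOKEN = "replace-me"

def sha256_of(path):
    """Fingerprint the suspicious attachment from the phishing email."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_other_copies(sha256):
    """Ask the search service which endpoints have seen this exact file."""
    req = urllib.request.Request(
        f"{SEARCH_URL}?sha256={sha256}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(find_other_copies(sha256_of("suspicious_attachment.docx")))
```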

There’s another lesson to be learned here, one that is a good reminder for your organization and your staff: We are all on the dark web, in the sense that much of its information is about us. All of the information that has been hacked over the years (financial information, Social Security numbers, credit reports, background checks, medical information, employment files and, of course, emails and logon credentials) is likely to be found there.

That’s why, even though much of the credential trove that surfaced from the depths of the web turned out to be old, there are still lessons worth repeating. Chief among them: assume that all of the information already out there increases your risk, and plan for how it can be used in phishing attacks.

Tips From the Trenches: Using Identity and Access Management to Increase Efficiencies and Reduce Risk

As a security company, it’s imperative that we uphold high standards in every aspect of our security program. One of the most important and foundational of these areas is our Identity and Access Management (IAM) program. As part of Code42’s approach to this program, we have identified guiding principles that have a strong focus on automation. Below is an outline of our journey.

IAM guiding principles

Every IAM program should have guiding principles agreed upon by HR, IT and security. Here are a few of ours:

1. HR would become the source of truth (SoT) for all identity lifecycle events, ranging from provisioning to de-provisioning.

The initial focus was to automate the provisioning and de-provisioning processes, then address the more complex transfer scenario in a later phase. HR would trigger account provisioning when an employee or contractor was brought onboard, and shut off access as workers left the company. Further, the HR system would become authoritative for the majority of identity-related attributes for our employees and contractors. This allowed updates made to an individual’s HR record (e.g. changes in job title or manager) to flow automatically to downstream connected systems, updates that previously required a Help Desk ticket and manual work.

2. Our objectives would not be met without data accuracy and integrity.

In-scope identity stores such as Active Directory (AD) and the physical access badge system had to be cleansed of legacy (stale) and duplicate user accounts before they were allowed to be onboarded into the new identity management process. Any user account that could not be matched or reconciled to a record in the SoT system was remediated. Although a rather laborious exercise, this was unquestionably worth it in order to maintain data accuracy.

3. Integrate with existing identity infrastructure wherever possible.

We used AD as our centralized enterprise directory, and it continues to function as the bridge between our on-premises environment and our cloud identity broker, Okta. Integrating with AD was crucially important because it allows us to centrally manage access to both on-premises and cloud-based applications. When a worker leaves the company, all we need to do is ensure the user account is disabled in AD, which in turn disables the person’s access in Okta.

Once we had agreement on our guiding principles, it was time to start the design and implementation phase. We built our solution on Microsoft Identity Manager (MIM) because our IAM team had used Microsoft’s provisioning and synchronization engine in the past and found it easy to configure, rich in built-in connectors and extensible via .NET.

IAM implementation phases

Identity in every organization is managed through a lifecycle. Below are two of the identity phases we have worked through and the solutions we built for our organization:

1. Automating provisioning and deprovisioning is key, but can also cause challenges.

One challenge we had was the lag between a new employee starting and the employee’s record being populated in the systems that act as the source of truth. That lag leaves no lead time to provision a user account and grant access for the incoming worker. We solved this obstacle by creating an intermediate “SoT identity” database that mirrors the data we receive from our HR system. From there, we were able to write a simple script that ties into our service desk and creates the necessary database entry.
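A minimal sketch of that staging step, assuming sqlite3 stands in for the real database and the new-hire fields arrive from a service desk ticket rather than the dict literal shown:

```python
# "SoT identity" staging sketch. Assumptions: sqlite3 stands in for the
# real database; field names are illustrative; in practice the ticket
# data would come from the service desk system, not a literal.
import sqlite3

def ensure_schema(conn):
    conn.execute(
        """CREATE TABLE IF NOT EXISTS sot_identity (
               employee_id TEXT PRIMARY KEY,
               full_name   TEXT NOT NULL,
               job_title   TEXT,
               manager_id  TEXT,
               start_date  TEXT NOT NULL,
               status      TEXT NOT NULL DEFAULT 'pending'
           )"""
    )

def stage_new_hire(conn, ticket):
    """Create the identity record ahead of day one so downstream
    provisioning (AD account, badge, app access) has lead time to run."""
    conn.execute(
        "INSERT OR REPLACE INTO sot_identity VALUES (?, ?, ?, ?, ?, 'pending')",
        (ticket["employee_id"], ticket["full_name"], ticket["job_title"],
         ticket["manager_id"], ticket["start_date"]),
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("sot_identity.db")
    ensure_schema(conn)
    stage_new_hire(conn, {
        "employee_id": "E12345", "full_name": "Pat Example",
        "job_title": "Analyst", "manager_id": "E10001",
        "start_date": "2019-03-04",
    })
```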

The next challenge was to automate the termination scenario. Like most companies, our HR systems maintain the user record long past an employee’s departure date for compliance and other reasons. Despite this, we needed a way to decommission the user immediately at the time of departure. For this, we developed a simple web portal that allows our Help Desk and HR partners to trigger termination. Once a user is flagged for termination in the portal, the user’s access is automatically disabled by the identity management system. De-provisioning misses are a thing of the past!
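Here is a sketch of the disable step such a portal could trigger, assuming the open-source ldap3 library, a service account with rights to modify userAccountControl, and placeholder host and DN values. Setting userAccountControl to 514 (ACCOUNTDISABLE + NORMAL_ACCOUNT) is the standard way to disable an AD account; per the integration described above, Okta then picks the change up from AD.

```python
# AD account-disable sketch. Assumptions: the ldap3 open-source library,
# a service account with rights to modify userAccountControl, and
# placeholder host/DN/password values.
from ldap3 import Connection, Server, MODIFY_REPLACE

AD_HOST = "ldaps://dc1.example.com"  # placeholder domain controller
SVC_USER = "CN=svc-iam,OU=Service,DC=example,DC=com"
SVC_PASS = "replace-me"

DISABLED_NORMAL_ACCOUNT = 514  # ACCOUNTDISABLE (2) + NORMAL_ACCOUNT (512)

def disable_user(user_dn):
    """Flip the account to disabled; downstream SSO access shuts off too."""
    server = Server(AD_HOST, use_ssl=True)
    with Connection(server, user=SVC_USER, password=SVC_PASS) as conn:
        conn.modify(user_dn, {
            "userAccountControl": [(MODIFY_REPLACE, [DISABLED_NORMAL_ACCOUNT])]
        })
        return conn.result

if __name__ == "__main__":
    print(disable_user("CN=Departing User,OU=Staff,DC=example,DC=com"))
```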

2. Re-design and improve the access review process.

This phase aims to replace our current manual, spreadsheet-based, quarterly access certification process with a streamlined process using the built-in review engine in the identity management tool.

Implementing IAM at Code42 has been an awesome experience, and with the impending launch of the request portal, this year will be even more exciting! No matter how far along you are in your IAM implementation journey, I hope the concepts shared here help you along the way.

The Code42 Security Team Talks Predictions for 2019

As companies plan their data security strategy for 2019, they’re faced with a particularly challenging set of unknowns. On top of shifts in the market and political uncertainties, businesses must operate in an ever-changing threat landscape as they make decisions about how best to protect their most valuable asset: their data.

We gathered members of the Code42 security team for a roundtable discussion to get their cybersecurity predictions for 2019. The upshot: Employee behavior and the need for collaboration will challenge security teams as they face an increasingly hostile threat landscape and tightened regulations.

Employee behavior and corporate practices will be front-and-center for data security strategies.

Chrysa Freeman, senior analyst, security awareness and training: Security awareness isn’t always a hot topic, but we’re going to see a lot of change in this space in 2019. Annual compliance trainings and e-learnings will be replaced by interactive, short, frequent trainings to increase employee engagement and retention of the content. Companies will start using humor instead of the somber, scare-your-socks-off tone of years past because they’ll recognize they’ll be more successful when trainings are engaging and to the point.

Jeremy Thimmesch, senior information security analyst: We will continue to see organizations struggling with the basics: patching, asset management, access control and data management. Vulnerabilities in operating systems, applications and infrastructure will go unpatched due to IT constraints, leadership priorities, and poorly implemented vulnerability and risk management programs. As a result, we will continue to see breaches from the usual suspects: phishing, lack of user awareness and poor patch management.

“ Employee behavior and the need for collaboration will challenge security teams as they face an increasingly hostile threat landscape and tightened regulations. ”

Use of two-factor authentication and password managers will increase.

Jeff Holschuh, manager of identity: 2019 will be the year of two-factor authentication for consumer websites. With the huge number of compromised username/password combinations currently for sale on the dark web, the number of banks and e-commerce sites that allow a second authentication factor will increase substantially.

Chris Way, senior security engineer: As breaches continue to become more commonplace, more users will embrace password managers. They are timesavers when the alternative is having to manually update your passwords across the board.

The regulatory environment will tighten, but companies may not change anything.

Chris Ulrich, senior information security analyst: 2019 will be the beginning of the “Data Responsibility” movement, partly because of GDPR and partly because people are tired of having their data spilled all over the Internet with little to no recourse for the responsible party. Most breaches are a result of vulnerability and carelessness. I’m always hearing people ask, “What could security have done better?” But not once have I heard, “Why did we have this data in the first place?”

Nathan Hunstad, director of security: I’m a bit more pessimistic: nothing will change. Systems will continue to go unpatched, and as a result, avoidable exploits will not be avoided. People will click on links. There will be at least one breach with more than 100 million records lost. GDPR will increase the fines for some of these breaches, but not enough to motivate companies to approach security differently; the recent €50 million fine against Google is pocket change for such a company. Instead, we will see companies simply leave the EU market to avoid the regulatory burden.

Cyber warfare will escalate and create more mistrust in our digital world.

Andrew Moravec, security architect: 2019 will be the year when cyber warfare moves further out of the shadows. We’ll see nations actively spying on foreign citizens and bugging officials and executives via their own gadgets and technology. We’ll see foreign leaders and states use hacks and cyberattacks against global corporations as a form of extortion for political influence. With successful attacks, we’ll see bravado — “Big deal, what are you going to do about it?” — and fewer denials.

There will also be a resurgence of troubled and misguided attempts to regulate and monitor social networks, along with calls to ban VPNs and limit civilian cryptography, as is already the case in Australia.

You will see a cable or DSL network go down for a prolonged period of time, perhaps for days. It will be unclear whether this was an attack or the result of poor management or overwhelmed staff. The result will be a conversation about how dependent we are on computer networks for day-to-day life, and about just whom we trust with our link to the world.

Despite the increasing challenges, security teams will need to allow employee collaboration—and be collaborators themselves.

Michelle Killian, senior manager of security and risk compliance: I’d love to see security get better at real information sharing and collaboration in 2019. The DevOps community is awesome at sharing their failures as much as their wins, which allows the community to benefit. Security is, understandably, a bit more tight-lipped about our failures. But I think we’re only hurting ourselves and making adversaries out of what should be great security partners.

Byron Enos, senior security engineer: In 2019, security teams will be forced to become more agile to keep up with business demands. They will start moving away from big gates and bars, and instead gravitate towards automation and providing “security as a service” to internal business partners.

It’s Time to Bring Shadow IT Into the Light

Mention shadow IT to most enterprise IT and security professionals and you are likely to elicit a frown. It’s understandable. At its worst, shadow IT, such as an unsanctioned server or cloud storage service operated (shall we say, less than ideally) by business managers, can place systems and data at serious risk.

However, there’s another side to shadow IT. Shadow IT allows staff to choose their own cloud apps and services, which helps improve productivity, drive innovation and, not least, increase employee happiness.

Still, shadow IT can and does pose significant risks to the organization, as with the poorly managed server mentioned above. When users decide for themselves what cloud services they’ll use and how they’ll collaborate with co-workers, IT loses visibility into those systems and data. Ultimately, enterprise data ends up scattered across multiple cloud services, and visibility into vitally important data is lost. Not good.

“ According to Gartner, shadow IT comprises roughly 40 percent of enterprise technology purchases. That is, business leaders decide, manage, and control nearly 40 percent of technology purchases. ”

After all, if IT doesn’t know a technology is in place, then it’s impossible to secure it or the data it holds. And it’s impossible to know who is accessing that data and why. 

Regardless, shadow IT is a permanent part of the enterprise landscape, and IT and security teams need to adapt. According to Gartner, shadow IT comprises roughly 40 percent of enterprise technology purchases. That is, business leaders decide, manage and control nearly 40 percent of technology purchases.

That much technology, and the data it holds, can’t be left to lurk in the shadows.

We know why business users are so quick to embrace shadow IT. It can often take weeks or months for IT departments to deploy new servers or applications. But with only a credit card, business users can access cloud applications and services within minutes. 

The question becomes, how do IT teams harness that innovation from their staff, while also ensuring their data is adequately secured and protected?

They need to bring it out of the shadows. 

The first step is to assess what shadow applications and cloud services are in place, establishing an accurate baseline of what is actually in use.

There are a number of ways to achieve this, and the best method depends on the nature and size of your organization. You could start with a simple survey of the business groups to collect information on the applications they are using. Or you could begin by monitoring traffic and endpoints to see what applications are in use and where data is traveling. 
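As a toy illustration of the monitoring approach, the sketch below counts sightings of a few well-known cloud services in a DNS query log. The log format and the KNOWN_SAAS map are assumptions; a real inventory would draw on your proxy, CASB or DNS infrastructure.

```python
# Shadow-IT baseline sketch. Assumptions: dns_queries.log contains one
# queried domain per line, and KNOWN_SAAS is a small illustrative map of
# cloud services to watch for; not a complete catalog.
from collections import Counter

KNOWN_SAAS = {
    "drive.google.com": "Google Drive",
    "dropbox.com": "Dropbox",
    "onedrive.live.com": "OneDrive",
    "slack.com": "Slack",
    "box.com": "Box",
}

def baseline(log_path):
    """Count sightings of known cloud services in outbound DNS queries."""
    seen = Counter()
    with open(log_path) as log:
        for line in log:
            domain = line.strip().lower()
            for suffix, app in KNOWN_SAAS.items():
                if domain == suffix or domain.endswith("." + suffix):
                    seen[app] += 1
    return seen

if __name__ == "__main__":
    for app, hits in baseline("dns_queries.log").most_common():
        print(f"{app}: {hits} lookups")
```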

However you establish your baseline, the important thing is to get started. 

“ Now that you’ve identified shadow IT, whether it be cloud apps, storage or platforms, the goal shouldn’t be to reprimand or shut down these services. It should be to ensure the services that the staff has chosen are correctly managed and secured. ”

Now that you’ve identified shadow IT, whether it be cloud apps, storage or platforms, the goal shouldn’t be to reprimand or shut down these services. It should be to ensure the services that the staff has chosen are correctly managed and secured so that IT and security teams have adequate data visibility. That is, they can see what data is flowing to these services and ensure access to that data is controlled, and that the data is protected and recoverable. 

This way, when that poorly managed server is uncovered, it can become an educational moment. Staff can be made aware (or reminded) of how vital patching, system updates and proper monitoring of systems and data are to the security of the organization. And rather than taking the server down, IT can monitor and properly manage it. The same is true for all cloud services and applications. Rather than trying to ban them all, manage them.

One way to manage them is to use a solution like Code42 Next-Gen Data Loss Protection. It was built to collect information about every version of every file, giving businesses full visibility to where data lives and moves — from endpoints to the cloud. With that kind of oversight, security teams can monitor, investigate, preserve and ultimately recover their valuable IP without having to block data use or rely on the restrictive policies that are part of traditional data loss prevention (DLP). Instead of security teams working with limited visibility to a subset of files (when they need to gauge the risk of all their data) or hindering employee productivity, next-gen DLP helps them foster open, collaborative work environments.  

When shadow IT is managed in this way, the organization derives some distinct advantages. IT and security teams become better business enablers, supporting the needs of staff and business users. They become trusted advisors and facilitators who help the organization move forward securely.

Shape Technologies Group Relies on Code42 Next-Gen Data Loss Protection to Safeguard Data

As industry leaders seek to consolidate their positioning in the global marketplace, merger and acquisition activity continues to surge. In 2017, companies announced more than 50,000 transactions worldwide, for a total value of $3.5 trillion. However, only one out of five M&As achieves its potential value.

One culprit for lackluster M&A results? Losing valuable IP—much of which lives on employee endpoints—from the sell-side company during the acquisition process. Much of an acquisition target’s value lies in its IP. In order to get the full value of an acquisition, buy-side organizations must identify, locate, secure and safely migrate the IP of the sell-side company. And it has to happen fast.

IT implications for growth

One company that’s garnering top value from the IP of its mergers and acquisitions is SHAPE. You likely encounter the results of SHAPE’s waterjet cutting solutions every day. Its technologies are integral to many industries, such as auto, aerospace, food, mobile and fabrication. Since 1974, the Kent, Washington-based company has delivered more than 13,000 waterjet systems to customers in more than 100 countries.

The global company employs 1,400 workers in more than 20 offices in North and South America, Asia and Europe. The organization’s goal is to double in size over the next four years to reach $1 billion. In addition to strong organic growth, one of SHAPE’s growth strategies is acquisitions, many of which are smaller companies and some of which are overseas.

With such aggressive growth targets come data security and IT challenges. 

SHAPE turned to Code42 for its Next-Gen Data Loss Protection (DLP) to help protect precious IP during M&As and against loss or theft during employee departures.

“ Some of the companies SHAPE acquires don’t have sophisticated security and IT programs, so SHAPE’s IT team must quickly get their data secured, integrated with their core technologies and aligned with IT standards. ”

Protecting sell-side company data

A large part of the value proposition when SHAPE buys a company is the IP that comes along with it. Unfortunately, that data is easily put at risk by employee actions and departures. That’s why it’s critical to protect the files and information on the sell-side company’s devices. The IT department at SHAPE understands the reality of this risk and proactively takes steps to protect the organization’s IP and secure the data. 

“Our initial goal is to get the data captured and backed up,” says Jeff Zuniga, director of IT operations. “Some people have taken it upon themselves to delete files thinking they’re helping by cleaning things up. Once we get the data protected, we are able to manage it and consume it as needed.”

Some of the companies SHAPE acquires don’t have sophisticated security and IT programs, so SHAPE’s IT team must quickly get their data secured, integrated with their core technologies and aligned with IT standards.

“We needed a quick way to be able to start collecting the data that resided on their machines,” says Zuniga. “A lot of them ran on a shoestring budget and workstation backups weren’t part of their vocabulary.”

Safeguarding data during consolidation

Organizational consolidation often accompanies acquisitions — and often includes employee departures. To monitor IP and determine whether there’s any suspicious file movement or deletion during this process, SHAPE is using Code42. 

“Being able to make sure we’re protecting our IP, that it’s not walking out the door, is important,” says Zuniga.

At a company that’s the innovator in its field, IP carries a premium—and without the right tools, it could be vulnerable to insider threats.

“We have a lot of IP like our drawings, sensitive information like cost of goods, where we purchase things, vendor lists,” says Zuniga. “We’ve been running reports and looking at users who have copied their local drives. We have to sort through and see whether they’re personal files or whether they contain IP.”

As it maintains its growth trajectory, SHAPE’s strategic approach to IT will continue to serve it well. And Code42 Next-Gen Data Loss Protection capabilities, like data collection and monitoring, will help SHAPE safeguard its valuable IP: that of its acquisition targets and the homegrown ideas that have made it the industry leader for more than four decades.

Product Spotlight: Using Delayed Client Updates to Test the Code42 App

One of the benefits of selecting a Code42 cloud deployment is that you don’t need to manage software upgrades. Code42 manages all infrastructure, and the Code42 app installed on endpoints is automatically updated when new versions are released. This process ensures your organization always has the latest security updates and newest functionality.

However, some customers have told us their change management process requires them to test new versions of the Code42 app with internal groups prior to distributing to the entire organization. Today we’re excited to announce new functionality that allows you to do just that.

With the new delayed client updates functionality, Code42 cloud deployment customers have up to thirty days to test new versions of the Code42 app before all endpoints are updated. In most cases, you will be notified one week prior to the release date so that you can prepare for the start of the testing period.

How to use delayed client updates

First, you must opt into this functionality by setting a global delay for all Code42 app updates. The delay can be set for up to thirty days, and it determines the date on which all endpoints will receive a new version of the Code42 app after its release. Customers who do not set a global delay will continue to receive new versions of the Code42 app automatically on the release date.

Once you’ve selected your global delay, you can specify organizations as “exceptions” to the delay date. These become your test organizations. For example, if you’ve set your global delay to the thirty-day maximum, you can arrange for the IT organization to receive the update on the general availability date and for the marketing organization to receive the new app ten days after the release. This allows for sequenced testing with multiple test groups. If needed, you can also deploy to individual devices for targeted testing.
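The scheduling logic is simple date arithmetic, illustrated below with hypothetical dates and organization names. This is not how the Code42 console computes rollouts internally, just the effect of the settings described above.

```python
# Hypothetical illustration of delayed-update scheduling. Assumptions:
# the release date, delay values and org names are examples only.
from datetime import date, timedelta

RELEASE_DATE = date(2019, 3, 1)   # example general-availability date
GLOBAL_DELAY_DAYS = 30            # the maximum allowed global delay
EXCEPTIONS = {                    # per-org overrides, in days after release
    "IT": 0,                      # first test wave, on the GA date
    "Marketing": 10,              # second test wave
}

def update_date(org):
    """When a given organization's endpoints get the new app version."""
    delay = EXCEPTIONS.get(org, GLOBAL_DELAY_DAYS)
    return RELEASE_DATE + timedelta(days=delay)

for org in ("IT", "Marketing", "Sales"):
    print(f"{org}: {update_date(org)}")  # Sales falls back to the global delay
```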

Once you’ve completed any desired testing, all Code42 apps will update automatically according to your global delay setting.

We hope this process allows you to follow your established change management process while still benefiting from the automatic updates that come with a cloud deployment. Happy testing!

Don’t Let Your Security Be Blinded by Cloud Complexity

It’s incredible how complex today’s IT environments have become. Among the central promises of cloud computing were simplified management and security. Almost paradoxically, however, the very ease of cloud deployment and use has led to an explosion of adoption that presents a significant challenge for security teams.

The challenge isn’t necessarily just the number of cloud services in use but how scattered an organization’s data becomes across those services. It doesn’t seem that long ago that nearly all enterprise data was stored on local drives or shared storage in a data center. No more. With the rise of cloud services, files are likely to be stored on user endpoints as well as across a number of cloud services, including Box, Google Drive and OneDrive, and on collaboration platforms like Slack.

“ Unfortunately, the rise in IT management complexity will continue to make for rising security challenges. ”

To add to the complexity, the research firm Gartner estimates that more than 80 percent of enterprise data is unstructured data, and most of that data is expected to be stored in cloud systems.

And, while this may be surprising — because it feels like cloud adoption has been ongoing for some time now — the reality is that the move to the cloud is still in its early stages. According to the market research firm Stratistics MRC, the global cloud storage market is expected to grow from its $19 billion market size in 2015 to more than $113 billion by 2022. That’s an annual growth rate of roughly 29 percent.

All of this compromises the ability of security teams to peer into the movement and location of the organization’s sensitive data. Security teams simply cannot monitor organizational data for changes or see where it travels. Security investigations become harrowing and require complex workflows with multiple tools to attempt to analyze potential security events — and forget about knowing for certain whether specific data files are backed up and recoverable.

These are questions security teams need to be able to answer, not only to meet security and regulatory compliance demands but also to ensure data availability for the business.

Unfortunately, the rise in IT management complexity will continue to make for rising security challenges. And, let’s be honest, security technologies have not always made the jobs for security professionals easier.

Consider how difficult most security tools are to set up and manage. This is unfortunately the case when it comes to most prevailing security technologies: web application firewalls, intrusion detection and prevention systems, encryption and so on. The same is true for traditional enterprise DLP.

The more complex the environment, the more challenging security becomes, and the more seamlessly security tools must fit into the workflows of enterprise security managers.

This is why we made Code42 Next-Gen DLP straightforward to connect to cloud services and easy to use. Rather than being blinded by complexity, security teams can see where files are moving and quickly determine whether something needs to be investigated. The product provides a comprehensive view of file activity across both endpoints and cloud services.

Code42 Next-Gen DLP is designed to simplify investigatory workflows, shorten incident response time and help to reduce security and compliance risks.

In order to effectively manage cloud complexity, security teams need to be able to simplify their workflows — and do so regardless of the cloud services employees choose to use. After all, our IT environments aren’t going to get any easier to manage any time soon. We are creating more files, which are being stored in more cloud services, than ever before — and security threats and regulatory demands aren’t going to go away either. Your best defense is to ensure you have the necessary visibility to manage and secure your user data no matter where that data is being used and stored.

Tips From the Trenches: Red Teams and Blue Teams

In my most recent post, I wrote about the important role proactive threat hunting plays in a mature security program. Equally important to a well-designed program and closely related to hunting for threats is having a robust red team testing plan. Having a creative and dynamic red team in place helps to “sharpen the knife” and ensure that your security tools are correctly configured to do what they are supposed to do — which is to detect malicious activity before it has advanced too far in your environment.

“ It is much more challenging to build and maintain defensible systems than infiltrate them. This is one of the reasons why red team exercises are so important. ”

Red teams and blue teams

A red team’s mandate can range from assessing the security of an application or an IT infrastructure to testing a physical environment. In this post, I am referring specifically to general infrastructure testing, where the goal is to gain access to sensitive data by (almost) any means necessary, evaluate how far an attacker can get, and determine whether your security tools can detect or protect against the malicious actions. The red team attackers approach the environment as an outside attacker would.

While your red team assumes the role of the attacker, your blue team acts as the defender. It’s the blue team that deploys and manages the enterprise’s defenses. While the red team performs its “attack” exercises, there is much your blue team can learn about the effectiveness of your company’s defenses: where the shortfalls are and where the most important changes need to be made.

Defining success

Before conducting a red team test, it helps to decide on a few definitions:

1. Define your targets: Without specifying what the critical assets are in your environment — and therefore what actual data an actual attacker would try to steal — your testing efforts will not be as valuable as they could be. Time and resources are always limited, so make sure your red team attempts to gain access to the most valuable data in your organization. This will provide you the greatest insights and biggest benefits when it comes to increasing defensive capabilities.

2. Define the scope: Along with identifying the data targets, it is essential to define the scope of the test. Are production systems fair game or will testing only be done against non-production systems? Is the social engineering of employees allowed? Are real-world malware, rootkits or remote access trojans permitted? Clearly specifying the scope is always important so that there aren’t misunderstandings later on.

How tightly you scope the exercise involves tradeoffs. Looser restrictions make for a more realistic test. No attacker will play by the rules; they will try to breach your data using any means necessary. However, opening up production systems to the red team exercise could interrupt key business processes. Every organization has a different risk tolerance for these tests. I believe that the more realistic the red team test is, the more valuable the findings will be for your company.

Once you define your scope, make sure the appropriate stakeholders are notified, but not everybody! Telegraphing the test ahead of time won’t lead to realistic results.

3. Define the rules of engagement: With the scope of the test and data targets well defined, both the red team and the blue team should have a clear understanding of the rules for the exercise. For example, if production systems are in scope, should the defenders treat alarms differently if they positively identify an activity as part of the test? What are the criteria for containment, isolation and remediation for red team actions? As with scope, the more realistic you can make the rules, the more accurate the test will be, but at the potential cost of increased business interruption.
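One lightweight way to make these definitions concrete is to capture them in a reviewable artifact before the exercise begins. The sketch below is an illustration only, not a standard red-team schema; its fields simply mirror the questions raised above.

```python
# Engagement-definition sketch. Assumptions: the fields are examples drawn
# from the questions above (targets, scope, rules of engagement), not a
# standard or complete red-team schema.
from dataclasses import dataclass, field

@dataclass
class EngagementRules:
    targets: list                        # the "crown jewel" data sets in scope
    production_in_scope: bool = False    # may the red team touch prod systems?
    social_engineering_ok: bool = True   # is phishing employees allowed?
    real_malware_ok: bool = False        # rootkits / remote access trojans?
    containment_criteria: str = ""       # when defenders isolate red-team hosts
    stakeholders_notified: list = field(default_factory=list)

rules = EngagementRules(
    targets=["customer database", "source code repository"],
    production_in_scope=True,
    containment_criteria="isolate only on confirmed data exfiltration",
    stakeholders_notified=["CISO", "IT director"],  # not everybody!
)
print(rules)
```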

Making final preparations

Don’t end the test too quickly. A real attacker who targets your organization may spend weeks or even months performing reconnaissance, testing your systems and gathering information about your environment before they strike. A one-day red team engagement won’t be able to replicate such a determined attacker. Giving the red team the time and resources to mount a realistic attack will make for more meaningful results.

It’s also important to precisely define what success means. Often a red team attacker will gain access to targeted resources. This should not be seen as a failure on the part of the blue team. Instead, success should be defined as the red team identifying gaps and areas where the organization can improve its security defenses and response processes, ultimately removing unneeded access to systems that attackers could abuse. A test that ends too early because the attacker was “caught” doesn’t provide much in the way of meaningful insight into your security posture. An excellent red team test is a comprehensive one.

It’s important to note that defenders have the harder job, as the countless daily news stories about breaches illustrate. It is much more challenging to build and maintain defensible systems than infiltrate them. This is one of the reasons why red team exercises are so important.

Completing the test

Once the test is complete, the red team should share with the blue team the strategies they used to compromise systems, gain access and evade detection. Of course, the red team should document all of this during the test. Armed with this information, the blue team can determine how to harden the environment and create a bigger challenge for the red team during the next exercise.

We have a fantastic red team here at Code42. The team has conducted multiple tests of our infrastructure, and we have always found the results to be incredibly valuable. Any organization, no matter the size, can gain much more than they risk by performing red team testing.

As always, happy threat hunting!

The Best of the Blog: December 2018

Catch up on the best stories from the Code42 blog that you might have missed in December. Here’s a roundup of the highlights.

Tips From the Trenches: Threat-Hunting Weapons: Defensive tools are essential for any cybersecurity team. But to take your security to the next level, it’s time to go on offense. Learn how proactive threat hunting can improve the effectiveness of any security program.

It’s Time to Rethink DLP: Three of the five most common data loss incidents involve insiders. Today’s idea-focused organizations need to keep their valuable IP safe, but the prevention-only focus of their legacy DLP solutions no longer matches their needs. Learn how Code42 Next-Gen DLP protects all data without hampering employee productivity.

Product Spotlight: Saved Searches: Most organizations have “crown jewels” — data that makes or breaks the business. Learn how to quickly — and repeatedly — find where these crucial files exist in your organization with the new saved searches feature of Code42 Next-Gen DLP.

2018: The Year in Review at Code42: It has been an eventful year for Code42. Catch up on all the new ways Code42 can help you protect your data.

2018: The Year in Review at Code42

The end of the year is always a great time for reflection. The last 12 months have been especially eventful for Code42. This year, the Code42 product grew and evolved in significant ways. We made product enhancements and introduced more tools to gather actionable intelligence about data risk. Most importantly, we added capabilities that paved the way for our biggest product yet: Code42 Next-Gen Data Loss Protection. We couldn’t have brought this exciting new solution to life without the foundational features unveiled throughout 2018. Here’s a look back at the highlights.

Code42 Forensic File Search

In April, we launched Code42 Forensic File Search, which now forms the core investigation capabilities of Code42 Next-Gen Data Loss Protection. By collecting file metadata and events from endpoints and making them searchable via the cloud, Code42 Forensic File Search enables security teams to get comprehensive answers to challenging security questions in seconds versus days or weeks.

Code42 Forensic File Search expands into cloud services

Our September release included several more enhancements, both big and small. We extended Code42 Forensic File Search so security teams can search for files by SHA256 hash and across cloud services, including Microsoft OneDrive and Google Drive. These enhancements truly unified and broadened the investigation capabilities of Code42 Next-Gen Data Loss Protection, providing full visibility to where corporate files live and move.

With the ability to search file activity in the cloud, IT and security teams are now able to more quickly see what files are shared and with whom; how and when files are added to cloud services; and what files a departing employee accessed, shared, downloaded or transferred before resigning. To further strengthen this capability in 2019, we’ll continue to expand across other cloud services.

With our November release, we added even more improvements to Code42’s investigation and monitoring capabilities. File Exfiltration Detection support was introduced for Mac devices and now detects files being sent via Slack, FileZilla, FTP and cURL. To make it even easier to keep track of the most critical files, we also rolled out the ability to save search queries.

Code42 customers embraced cloud architectures

Meanwhile, customers told us their cloud strategies were changing. Companies that had originally chosen on-premises and hybrid deployment models were ready to fully embrace the benefits of the cloud. We set out to deliver a secure and seamless way for our customers to move to the cloud without needing to re-deploy or lose their historical data. This fall, we were proud to deliver a migration path that enables customers to deploy in the cloud in a couple of hours, without any user downtime or data loss. Many customers have already upgraded to the cloud to eliminate on-premises hardware and take advantage of all the newest Code42 functionality. If you are a Code42 customer interested in moving to a cloud deployment, contact your CSM today to learn more.

“ Code42 Next-Gen Data Loss Protection takes a fundamentally different approach to protecting corporate data. ”

Next-gen data loss protection

In October, we brought all of our core capabilities together into a single holistic solution and unveiled Code42 Next-Gen Data Loss Protection. We heard from our customers and the market that while traditional data loss prevention (DLP) solutions sound good in concept, they’re failing to live up to their potential in several key ways. Most companies are only using a fraction of the capabilities of their traditional DLP solutions. Security teams describe using traditional DLP as “painful.” Deployments of these tools can take months or years, because proper setup requires an extensive data classification process, and refining DLP policies to fit unique users is complex and iterative. To make the situation even more challenging, traditional DLP blocks employees from getting their work done with rigid data restrictions that interfere with productivity and collaboration.

Most importantly, traditional DLP solutions are narrowly focused on prevention — and business and security leaders now recognize that prevention alone does not work. Data loss will happen. Being able to protect a business from data loss and quickly recover from an incident is more important than the constant efforts needed to prevent an attack from happening — especially when, in the end, prevention fails.

Code42 Next-Gen Data Loss Protection takes a fundamentally different approach to protecting corporate data. Unlike traditional DLP, it does not require policies, which has multiple benefits. The solution deploys in days instead of months; it is not resource-intensive to manage; and it doesn’t burden administrators with false positives. Most importantly, it doesn’t drain user productivity with rigid restrictions on data use.

Code42 Next-Gen Data Loss Protection is cloud-native and preserves every version of every file on every endpoint, forever. It monitors file activity across all endpoints and an ever-expanding list of cloud services. As a result, it provides unified visibility to where files live and move as well as access to the contents of files involved in data security investigations. Code42 Next-Gen Data Loss Protection preserves current and historical endpoint files for rapid content retrieval and investigation, as well as to help meet regulatory requirements.

To achieve these benefits, Code42 Next-Gen DLP leverages five key capabilities:

  • Collection: Automatically collects and stores every version of every file across all endpoints, and indexes all file activity across endpoints and cloud. 
  • Monitoring: Helps identify file exfiltration, providing visibility into files being moved by users to external hard drives, or shared via cloud services, including Microsoft OneDrive and Google Drive.
  • Investigation: Helps quickly triage and prioritize data threats by searching file activity across all endpoints and cloud services in seconds, even when endpoints are offline; and rapidly retrieves actual files — one file, multiple files or all files on a device — to determine the sensitivity of data at risk.
  • Preservation: Allows configuration to retain files for any number of employees, for as long as the files are needed to satisfy data retention requirements related to compliance or litigation.
  • Recovery: Enables rapid retrieval of one file, multiple files or all files on a device even when the device is offline, or in the event files are deleted, corrupted or ransomed.

It’s been a big year for Code42, and with the launch of Code42 Next-Gen Data Loss Protection, next year will be even bigger. Thanks for taking this trip down memory lane with us and see you in 2019!