42 Seconds with a Code42 Customer: Lehigh University

Code42 provides your business with a variety of data security benefits, including increased productivity, risk mitigation, streamlined user workflows, and more–all in a single product that’s been proven to ultimately save you money. While Code42 has a few primary use cases–backup and recovery, device migration, etc.–we’ve learned that our different customers use Code42 in different ways. To explore how customers use our product, we recently partnered with the talented team at creative agency Crash+Sues to create a series of animated videos featuring the voices and likenesses of actual Code42 users.

In our latest video, Naazer Ashraf, senior computing consultant at Lehigh University, explains why they rely on Code42 over sync and share products for data backup and restore. As one of the nation’s premier research universities, Lehigh’s faculty are known for their excellence in research. Obviously, data is extremely important (and valuable) to researchers, so imagine the reaction when one researcher deleted files from Google Drive to save space–and discovered that doing so wiped the files for 10 other researchers. Naazer tells the story in just 42 seconds. Check it out below.

Protect Your Data from Insider Threats with Code42

Code42 provides your business with a variety of benefits, including increased productivity, risk mitigation, streamlined user workflows, and more – all in a single product that’s been proven to ultimately save you money. Recently, Code42 launched Security Center, a new suite of tools to help you spot suspicious data use behaviors in your workforce – and respond to them if necessary. There’s a big reason why we added this feature – the facts show that 89 percent of corporate data loss involves the actions of an insider.

We recently partnered with the talented team at creative agency Crash+Sues to create a series of videos about the core features of Code42. This most recent video focuses on an all-too-common scenario in which an employee decides to steal valuable data from his employer. Unfortunately for him, this company has Code42’s Security Center.

Take a look today for an illustration of how Code42 and Security Center can help keep your enterprise’s data safe from insider threats.

Tips From the Trenches: Cloud Custodian–Automating AWS Security, Cost and Compliance

“We’re moving to the cloud.” If you haven’t heard this already, it’s likely you will soon. Moving to the public cloud poses many challenges for businesses today; the primary problems that come to the forefront are security, cost and compliance. Where do businesses even start? How many tools do they need to purchase to fulfill these needs?

After deciding to jump-start our own cloud journey, we spun up our first account in AWS, and it was immediately apparent that traditional security controls weren’t necessarily going to adapt. Trying to lift and shift firewalls, threat and vulnerability management solutions and other tools ran into a multitude of issues, including but not limited to networking, AWS IAM roles and permissions, and tool integrations. It was clear that tools built for on-premises deployments were no longer cost-effective or technically effective in AWS and a new solution was needed.

“ It was clear that tools built for on-premises deployments were no longer cost-effective or technically effective in AWS and a new solution was needed. ”

To address these challenges, we decided to move to a multi-account strategy and automate our resource controls to support increasing consumption and account growth. Our answer was Capital One’s Cloud Custodian open source tool, because it helps us manage our AWS environments by ensuring the following business needs are met:

  • Compliance with security policies
  • Compliance with AWS tagging requirements
  • Identification of unused resources for removal or review
  • Enforcement of off-hours to maximize cost reduction
  • Enforcement of encryption requirements
  • Prevention of overly permissive AWS Security Groups
  • And many more…

After identifying a tool that could automate our required controls in multiple accounts, it was time to implement it. The rest of this blog will focus on how Cloud Custodian works, how Code42 uses the tool, what kinds of policies (with examples) Code42 implemented and resources to help you get started implementing Cloud Custodian in your own environment.

How Code42 uses Cloud Custodian

Cloud Custodian is an open source tool created by Capital One. You can use it to automatically manage and monitor public cloud resources as defined by user-written policies. Cloud Custodian works in AWS, Google Cloud Platform and Azure. We, of course, use it in AWS.

As a flexible “rules engine,” Cloud Custodian allowed us to define rules and remediation efforts in a single policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. These policies are written in a simple YAML configuration file that specifies a resource type, resource filters and actions to be taken on the specified targets. Once a policy is written, Cloud Custodian can interpret the policy file and deploy it as a Lambda function in an AWS account. Each policy gets its own Lambda function that enforces the user-defined rules on a user-defined cadence. At the time of this writing, Cloud Custodian supports 109 resources, 524 unique actions and 376 unique filters.

Rather than writing and combining multiple custom scripts that make AWS API calls, retrieve responses and then execute further actions based on the results, Cloud Custodian simply interprets an easy-to-write policy, takes the specified resources, filters and actions into consideration and translates them into the appropriate AWS API calls. These simplifications make this type of work easy and achievable even for non-developers.
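To make that concrete, here is a minimal sketch of what a policy file might look like. The policy name, tag key, schedule and IAM role below are illustrative placeholders rather than our actual configuration:

```yaml
policies:
  - name: ec2-stop-untagged               # illustrative policy name
    resource: ec2                         # the AWS resource type to target
    mode:
      type: periodic                      # deploy as a Lambda that runs on a schedule
      schedule: "rate(1 hour)"
      role: arn:aws:iam::123456789012:role/cloud-custodian   # placeholder execution role
    filters:
      - "tag:owner": absent               # example filter: instances missing an "owner" tag
    actions:
      - stop                              # example action taken on matching instances
```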

“ As a flexible rules engine, Cloud Custodian allowed us to define rules and remediation efforts in a single policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. ”

Now that we understand the basic concepts of Cloud Custodian, let’s cover the general implementation. Cloud Custodian policies are written and validated locally. These policies are then deployed either by running Cloud Custodian locally and authenticating to AWS or, in our case, via CI/CD pipelines. At Code42, we deploy a baseline set of policies to every AWS account as part of the bootstrapping process and then add or remove policies as needed for specific environments. In addition to account-specific policies, there are scenarios where a team may need an exemption; for those cases, we typically allow an “opt-out” tag on some policies. Policy violations are reported to a Slack channel via a webhook created for each AWS account. We also feed the resources.json logs directly into a SIEM for more robust handling and alerting.

Broadly speaking, Code42 has categorized policies into two types – (i) notify only and (ii) action and notify. Notify-only policies are more hygiene-related and include policies like tag compliance checks, multi-factor authentication checks and more. Action-and-notify policies take action once certain conditions are met, unless the resource is tagged for exemption. They include policies like s3-global-grants, ec2-off-hours-enforcement and more. The output from the Custodian policies is also ingested into a SIEM solution to provide more robust visualization and alerting. This allows individual account owners to review policy violations and assign remediation actions to their teams. For Code42, these dashboards give both the security team and account owners a view of the overall health of our security controls and account hygiene. Examples of Code42 policies may be found on GitHub.

What policies did we implement?

There are three primary policy types Code42 deployed: cost savings, hygiene and security. Since policies can take actions on resources, we learned it is imperative that the team implementing the policies collaborate closely with any teams affected by them to ensure all stakeholders know how to find and react to alerts and can provide feedback and adjustments when necessary. Good collaboration with your stakeholders will ultimately drive the level of success you achieve with this tool. Let’s hit on a few specific policies.

Cost Savings Policy – ec2-off-hours-enforcement

EC2 instances are one of AWS’s most commonly used services. EC2 allows a user to deploy cloud compute resources on demand as necessary; however, there are many cases where the compute is left “on” even when it’s not being used, which racks up costs. With Cloud Custodian, we’ve allowed teams to define “off-hours” for their compute resources. For example, if I have a machine that only needs to be online two hours a day, I can automate the start and stop of that instance on a schedule, saving 22 hours of compute time per day. As AWS usage increases and expands, these cost savings add up quickly.
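As a rough illustration, an off-hours policy pair built on Cloud Custodian’s offhour/onhour filters might look like the following. The hours, timezone and tag name are illustrative; teams can override the schedule per resource via the tag:

```yaml
policies:
  - name: ec2-off-hours-stop
    resource: ec2
    filters:
      - type: offhour                 # built-in off-hours filter
        offhour: 19                   # stop instances at 7 p.m. (illustrative)
        default_tz: ct
        tag: custodian_downtime       # resources can carry their own schedule in this tag
    actions:
      - stop

  - name: ec2-off-hours-start
    resource: ec2
    filters:
      - type: onhour
        onhour: 7                     # start instances back up at 7 a.m. (illustrative)
        default_tz: ct
        tag: custodian_downtime
    actions:
      - start
```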

Hygiene Policy – ec2-tag-enforcement

AWS resource tagging is highly recommended in any environment. Tagging allows you to define multiple keys with values on resources that can be used for sorting, tracking, accountability, etc. At Code42, we require a pre-defined set of tags on every resource that supports tagging in every account. Manually enforcing this would be nearly impossible, so we utilized a Custodian policy to enforce our tagging requirements across the board. This policy performs a series of actions, described below.

  1. The policy applies filters to look for all EC2 resources missing the required tags.
  2. When a violation is found, the policy adds a new tag to the resource “marking” it as a violation.
  3. The policy notifies account owners of the violation and that the violating instance will be stopped and terminated after a set time if it is not fixed.

If Cloud Custodian finds that the required tags have been added within 24 hours, it removes the violation tag. If the proper tags are not added, the policy continues to notify account owners that their instance will be terminated. If the violation is not fixed within the specified time period, the instance is terminated and a final notification is sent.
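Below is a simplified sketch of how this workflow can be expressed with Cloud Custodian’s mark-for-op mechanism. The tag keys and grace period are illustrative, the escalation is shortened to a stop action, and the notification step (which relies on the c7n-mailer add-on) is omitted for brevity:

```yaml
policies:
  # 1. Mark EC2 instances that are missing required tags (tag keys are examples)
  - name: ec2-tag-enforcement-mark
    resource: ec2
    filters:
      - or:
          - "tag:owner": absent
          - "tag:cost-center": absent
    actions:
      - type: mark-for-op             # records the violation in a tag with a due date
        tag: c7n_tag_compliance
        op: stop
        days: 1

  # 2. Clear the violation mark once the required tags show up
  - name: ec2-tag-enforcement-unmark
    resource: ec2
    filters:
      - "tag:c7n_tag_compliance": not-null
      - "tag:owner": not-null
      - "tag:cost-center": not-null
    actions:
      - type: remove-tag
        tags: [c7n_tag_compliance]

  # 3. Act on instances whose grace period expired with the tags still missing
  - name: ec2-tag-enforcement-stop
    resource: ec2
    filters:
      - type: marked-for-op
        tag: c7n_tag_compliance
        op: stop
    actions:
      - stop
```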

This policy ultimately ensures we have tags that distinguish things like a resource “owner.” An owner tag allows us to identify which team owns a resource and where the deployment code for that resource might exist. With this information, we can drastically reduce investigation/remediation times for misconfigurations or for troubleshooting live issues.

Security Policy – S3-delete-unencrypted-on-creation

At Code42, we require that all S3 buckets have either KMS or AES-256 encryption enabled. It is important to remember that we have an “opt-out” capability built into these policies so they can be bypassed when necessary and after approval. The bypass is done via a tag that is easy for us to search for and review to ensure bucket scope and drift are managed appropriately.

This policy is relatively straightforward. If the policy sees a “CreateBucket” CloudTrail event, it checks the bucket for encryption. If no encryption is enabled and an appropriate bypass tag is not found, the policy deletes the bucket immediately and notifies the account owners. It’s likely that by this point you’ve heard of a data leak caused by a misconfigured S3 bucket. It can be nearly impossible to manually manage a large-scale S3 deployment or buckets created by shadow IT. This policy helps account owners learn good security hygiene, and at the same time it ensures our security controls are met automatically without having to search through accounts and buckets by hand. Ultimately, this helps verify that S3 misconfigurations don’t lead to unexpected data leaks.
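A simplified sketch of such a policy is shown below. The exemption tag name and IAM role are illustrative, and the notification step is omitted:

```yaml
policies:
  - name: s3-delete-unencrypted-on-creation
    resource: s3
    mode:
      type: cloudtrail                    # trigger the Lambda from a CloudTrail event
      events:
        - CreateBucket                    # fire whenever a new bucket is created
      role: arn:aws:iam::123456789012:role/cloud-custodian   # placeholder execution role
    filters:
      - "tag:encryption-exempt": absent   # illustrative opt-out tag
      - type: bucket-encryption
        state: false                      # no default encryption configured on the bucket
    actions:
      - type: delete
        remove-contents: true             # remove the non-compliant bucket and its contents
```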

Just starting out?

Hopefully this blog helped highlight the power of Capital One’s Cloud Custodian and its automation capabilities. Cloud Custodian policies can be learned and written even by non-developers, and they provide much-needed security capabilities. Check out the links in the “Resources” section below for Capital One’s documentation, as well as examples of some of Code42’s baseline policies that get deployed into every AWS account during our bootstrap process. Note: these policies should be tuned to your business and environment needs, and not all will be applicable to you.

Resources:

Authors:

Aakif Shaikh, CISSP, CISA, CEH, CHFI is a senior security analyst at Code42. His responsibilities include cloud security, security consulting, penetration testing and insider threat management. Aakif brings 12+ years of experience across a wide variety of technical domains within information security, including information assurance, compliance and risk management. Connect with Aakif Shaikh on LinkedIn.

Byron Enos is a senior security engineer at Code42, focused on cloud security and DevSecOps. Byron has spent the last four years helping develop secure solutions for multiple public and private clouds. Connect with Byron Enos on LinkedIn.

Jim Razmus II is director of cloud architecture at Code42. He tames complexity, seeks simplicity and designs elegantly. Connect with Jim Razmus II on LinkedIn.

Tips From the Trenches: Automating Change Management for DevOps

One of the core beliefs of our security team at Code42 is SIMPLICITY. All too often, we make security too complex, whether because there are no easy answers or because the answers are very nuanced. But complexity also makes it easy for users to find workarounds or ignore good practices altogether. So, we champion simplicity whenever possible and make it a basic premise of all the security programs we build.

“ At Code42, we champion simplicity whenever possible and make it a basic premise of all the security programs we build. ”

Change management is a great example of this. Most people hear change management and groan. At Code42, we’ve made great efforts to build a program that is nimble, flexible and effective. The tenets we’ve defined to drive our program are to:

  • PREVENT issues (collusion, duplicate changes)
  • CONFIRM changes are authorized changes
  • DETECT issues (customer support, incident investigation)
  • COMPLY with regulatory requirements

Notice compliance is there, but last on the list. While we do not discount the importance of compliance in conversations around change management or any other security program, we avoid at all costs using the justification of “because compliance” for anything we do.

Based on these tenets, we focus our efforts on high-impact changes that have the potential to affect our customers (both external and internal). We set risk-based maintenance windows that balance potential customer impact with the need to move efficiently.

We gather with representatives from both the departments making changes (think IT, operations, R&D, security) and those impacted by changes (support, sales, IX, UX) at our weekly Change Advisory Board meeting–one of the best attended and most efficient meetings of the week–to review, discuss and make sure teams are appropriately informed of what changes are happening and how they might be impacted.

This approach has been working really well. Well enough, in fact, for our Research Development & Operations (RDO) team to embrace DevOps in earnest.

New products and services were being deployed through automated pipelines instead of through our traditional release schedule. Instead of bundling lots of small changes into a product release, developers were now looking to create, test and deploy features individually–and autonomously. This was awesome! But also, our change management program–even in its simplicity–was not going to cut it.

“ We couldn’t make change control a blocker in an otherwise automated process. We looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets to give us visibility into the work being done. ”

So with the four tenets we used to build our main program, we set off to evolve change management for our automated deployments. Thankfully, because all the impacted teams had seen the value of our change management program to date, they were on board and instrumental in evolving the program.

But an additional tenet had to be considered for the pipeline changes: we couldn’t make change control a blocker in an otherwise automated process. So we looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets, giving us visibility into the work being done. We defined levels of risk tied to the deployments and set approvers and release windows based on risk. This serves both as a control to minimize potential impact to customers and as a challenge to developers to push code that is as resilient and low-impact as possible so they can deploy at will.
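As a purely hypothetical illustration of the idea (not our actual tooling), a GitLab CI-style job could file a change ticket for visibility and gate higher-risk deployments behind a manual approval, while letting low-risk changes ship automatically. The CHANGE_RISK variable and helper scripts below are made up for the example:

```yaml
deploy:
  stage: deploy
  script:
    - ./ci/create-change-ticket.sh "$CI_COMMIT_SHA" "$CHANGE_RISK"  # hypothetical helper that opens a change ticket
    - ./ci/deploy.sh                                                # hypothetical deployment script
  rules:
    # Higher-risk deployments wait for a named approver to start the job
    # during an approved maintenance window.
    - if: '$CHANGE_RISK == "high"'
      when: manual
    # Low-risk deployments ship automatically.
    - when: on_success
```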

We still have work to do. Today, we track when changes are deployed manually. In our near-future state, our pipeline tooling will serve as a gate and hold higher-risk deployments for release during maintenance windows. Additionally, we want to focus on risk, so we are building in commit hooks with required approvers based on risk rating. And, again, because we worked closely with the impacted teams to build a program that fit their goals (and because our existing program had proven its value to the entire organization), the new process is working well.

Most importantly, evolving our change process for our automated workflows allows us to continue to best serve our customers by iterating faster and getting features and fixes to the market faster.

Connect with Michelle Killian on LinkedIn.

Finally, a DLP for Macs

It’s time to face the facts: Macs are everywhere in the enterprise. A 2018 survey from Jamf found that more than half of enterprise organizations (52%) offer their employees a choice in their device of preference. Not entirely surprising: 72% of employees choose Mac. The Apple wave within business environments has begun and only promises to grow over time.

“ Legacy Data Loss Prevention (DLP) solutions don’t account for the Mac phenomenon and were not designed with them in mind. ”

The problem is that legacy Data Loss Prevention (DLP) solutions don’t account for the Mac phenomenon and were not designed with them in mind. As a result, legacy DLPs often approach Macs as an afterthought rather than a core strategy. Customer opinions of their DLP for Macs continue to be unfavorable. In fact, last year at Jamf’s JNUC event in Minneapolis, Mac users quickly revealed their sheer frustration with DLP and how it wasn’t built for Macs. Code42 customers currently using legacy DLP vendors vented about their Mac DLP experience saying, “It just sucks!”

Naturally, we asked why.

  1. No Support – Mac updates can be fast and furious. Unfortunately, DLP has traditionally struggled to keep up with those updates. The result? Errors, kernel panics and increased risk of data loss.
  2. No OS Consistency – We often forget that today’s businesses often use both Mac and Windows. DLP has traditionally maintained a very Windows-centric approach that has made the Mac experience secondary and inconsistent with Windows. Having two sets of users with varying levels of data risk is never good.
  3. It’s Slow – The number one issue often stems from performance-sucking agents that bring the productivity of Mac users to a screeching halt.
  4. Kernel Panics – This is worth reiterating. Macs are sensitive to anything they perceive as a threat, so when DLP software is seen as unsanctioned, the result is reboots and an increased risk of downtime.
  5. It’s Complicated – Traditional DLP still relies on legacy hardware and manual updates, which is time consuming and expensive.

Recently, Code42 unveiled its Next-Gen Data Loss Protection Solution at the RSA Conference 2019. One of the reasons our 50,000+ customers love us is precisely because of the superior Mac experience we deliver. Our next-gen DLP solution was built with the Mac user in mind. Learn more about our trusted and proven take on DLP for Mac.

Product Spotlight: Identify Risk to Data Using Advanced Exfiltration Detection

When it comes to data loss protection, there are fundamental security questions that every organization needs to answer. These include, “Who has access to what files?” and “When and how are those files leaving my organization?”

Code42 Next-Gen Data Loss Protection helps you get answers to these questions in seconds by monitoring and investigating file activity across endpoints and cloud services. And now, Code42 has expanded its investigation capabilities to provide greater visibility into removable media, personal cloud and web browser usage by allowing security analysts to search file activity such as:

  • Files synced to personal cloud services. Code42 monitors files that exist in a folder used for syncing with cloud services, including iCloud, Box, Dropbox, Google Drive and Microsoft OneDrive.
  • Use of removable media. Code42 monitors file activity on external devices, such as an external drive or memory card.
  • Files read by browsers and apps. Code42 monitors files opened in an app that is commonly used for uploading files, such as a web browser, Slack, FTP client or curl.

Advanced Exfiltration Detection can be applied to proactively monitor risky user activity — such as the use of USBs across an organization — as well as to eliminate blind spots during security investigations. For example, imagine you’ve just learned that a confidential roadmap presentation was accidentally sent to the wrong email distribution list. Sure, it can later be deleted from the email server. But did anyone download it? Has anyone shared it? By using Code42 to perform a quick search of the file name, you can answer those questions in seconds. You’ll not only see which users have downloaded the attachment, but also that one has since saved the file to a personal Dropbox account. With this information in hand, you can quickly take action against this risky data exposure.

See Advanced Exfiltration Detection in action.


Securing Data in Cloud Chaos

To succeed, every enterprise depends on data and the insights that can be gleaned from that data. Enterprises today are creating much more data than in prior years—much of it critical to their digital transformation efforts. And how this data is stored within enterprises has changed dramatically, which is having a profound impact on how that data must be secured.

How so? At one time, most enterprise data resided within enterprise databases and applications, and these applications remained (relatively) safely on enterprise endpoints or tucked back in the data center. Not anymore.

“ Gartner estimates that 80 percent of all corporate data today is actually stored unstructured. ”

That was the age of structured data. Today, data is more likely to be unstructured, residing in word-processing files, spreadsheets, presentations, PDFs and many other common formats. The research firm Gartner estimates that 80 percent of all corporate data today is unstructured.

This means today our enterprise data is scattered everywhere. And just because it’s not structured within an application doesn’t mean the data isn’t critical – unstructured data today includes financial information, trade secrets, marketing plans and work with contractors and business partners. Not all of this data is the same nor is it managed in the same way — yet this data must be protected.

How we share unstructured data is also changing. No longer is data sent merely as email attachments. Today, data is shared through social media programs, cloud apps and communication platforms, such as Slack. In many organizations, staff are sharing sensitive data, such as consumer information, intellectual property, prospect lists, financial data and the like. Security teams need to be alerted when sensitive information is shared.

These trends should give pause to anyone concerned about securing their enterprise information.

“ One of the most important steps for any organization that wants to start proactively securing their unstructured data is to determine where that data resides and then find viable ways to protect that data. ”

According to our 2018 Data Exposure Report, 73 percent of security and IT leaders believe there is data in their company that only exists on endpoints and 80 percent of CISOs agree that they can’t protect what they can’t see. Seventy-four percent believe IT and security teams should have full visibility over corporate data.

Unfortunately, without a dedicated and continuous focus on securing unstructured data, such visibility won’t ever exist. Only chaos. 

Yes, most organizations take reasonable steps to protect their applications and databases from costly data breaches. They invest in endpoint technologies that protect their users’ endpoints from malware. They focus on database security, application security and related efforts. And they try to control access to their local enterprise network. But the challenging reality remains: even if an organization executed perfectly on such a security architecture, it would still leave itself open to a vast amount of data theft and exploitation, because it would be ignoring the roughly 80 percent of its data that is unstructured.

Legacy security methods haven’t kept pace

It’s critical that enterprises get the security of their unstructured data right. Securing unstructured data is different from securing data stored within applications and databases.

One of the most important, and likely first, steps for any organization that wants to start proactively securing its unstructured data is to determine where that data resides and then find viable ways to protect it. Other capabilities they’ll need include monitoring who has access to that data, indexing file content across storage devices, cloud storage and cloud services, and monitoring that data for potential loss, misuse and theft.

Having these capabilities in place will not only help organizations to better secure that data and identify careless handling of data or even malicious insiders, but also improve the ability to conduct in-depth investigations and identify threats, preserve data for regulatory compliance demands and litigation situations, and rapidly recover lost or ransomed files.

The fact is that unstructured data is 80 percent of enterprise data today, and the places it’s being stored are expanding. It’s imperative you give it the appropriate level of focus. While you can’t put unstructured data back in the centralized data center again, you can bring a structured approach to data security that will rein in the chaos and adequately protect your enterprise information.

Data Loss Protection: Redefining DLP

Data is one of the most valuable currencies in existence today. Companies can thrive or die based on data, and attackers—from run-of-the-mill hackers, to cybercrime syndicates, to nation states—aggressively target data. It’s no wonder that an entire industry of tools and practices exists for the sole purpose of securing and protecting data. However, data loss and data breaches are still a constant concern.

Perhaps the model of data loss prevention—or DLP—itself is flawed? I recently had an opportunity to speak with Vijay Ramanathan, senior vice president of product management at Code42, about this issue and about the unique perspective Code42 has on solving the DLP problem.

“Fundamentally—at its core—even the notion of what DLP stands for is different for us,” opened Vijay. “You know DLP as ‘data loss prevention’. We approach it as ‘data loss protection’.”

“ Rather than focusing all of the attention on prevention, it’s important to acknowledge that there’s a high probability that incidents will still occur, and have the tools in place to detect when that happens. ”

That is clever and makes for good marketing, but changing a word around is just semantics. I asked Vijay to explain what that means for customers, and why he—and Code42—believe it is a superior or more effective way to tackle this problem.

He explained, “We want to look at data and data loss more holistically rather than just putting prevention strategies in place.” He went on to compare the approach to the way we treat other things in life—like our homes. He pointed out that people have locks on doors to prevent unauthorized access, but that many also augment them with alarm systems, surveillance cameras and home insurance to create a well-rounded home security strategy. Data security should be no different.

Traditional DLP is fundamentally flawed

Vijay described why the traditional approach to DLP is broken.

The standard model of DLP requires organizations to define which data is sensitive or confidential, and which data is trivial or meaningless. There has to be an initial effort to catalog and assign classifications to all existing data, and an ongoing process for users to assign classification tags to data as new data is created. 

If you only have a few people, or a relatively small amount of data, this approach may be feasible. But, for most organizations, it is challenging—bordering on impossible—to effectively implement data labeling policies, or maintain accurate asset tagging at scale.

The second issue Vijay mentioned was that DLP often creates new issues. He told me that data classification and data handling policies are designed to prevent bad things from happening, but implementing additional policies is like protecting your home by building a taller fence. It only goes so far as a means of data protection, and it forces bad behavior by users. Employees who just want to get their jobs done will often subvert or circumvent the system, or intentionally mis-classify data to avoid draconian policies.

“ Time to awareness or time to response is the most critical issue in cybersecurity today. The lag time before a company discovers a data loss incident is crucial. ”

Protection rather than prevention

So, what does Code42 do differently, and how does that translate to better data security? I asked Vijay to explain how the Code42 approach of data loss protection addresses these issues. 

“The whole approach of traditional DLP solutions seems highly problematic,” proclaimed Vijay. “Why don’t we just assume that all of the data is important? What’s important then is to make sure you understand what is happening with your data, so you can make reasonable, informed judgments about whether that access or activity makes sense or not.”

More locks and taller fences might work to some extent, but they will never be impervious. Rather than focusing all of the attention on prevention, it’s important to acknowledge that there’s a high probability that incidents will still occur, and have the tools in place to detect when that happens. 

Vijay stressed the importance of response time—and how quickly an organization can know what is happening. “Time to awareness or time to response is the most critical issue in cybersecurity today. The lag time before a company discovers a data loss incident is crucial.”

He explained that Code42 adopted a next generation DLP philosophy with no policies and no blocking. Code42 assumes that all data is important and provides customers with the visibility to know who is accessing it and where it is going, and the ability to monitor and alert without preventing users from doing legitimate work or hindering productivity in any way.

With this philosophy in mind, the company recently introduced its Code42 Next-Gen Data Loss Protection solution. It monitors and logs all activity. Within minutes of an event, Code42 can let you know that a file was edited or saved. Within 15 minutes, the file itself is captured and stored in the cloud. Customers can store every version of every file for as long as they choose to retain the data. Code42 also provides an industry-best search capability that allows all data from the previous 90 days to be quickly and easily searched at any time. 

Vijay shared that he believes the Code42 Next-Gen Data Loss Protection approach to data security is a better and more effective way to address this problem. Taking blocking and policies out of the equation makes it easier to administer and allows users to focus on being productive and efficient. The DVR-like ability to review past activity and zero in on suspicious behavior day in and day out gives customers the peace of mind that their data is safe and sound.

Tips From the Trenches: Thinking About Security Design

Part of the success criteria for any security program is to ensure the process, control or technology utilized has some additional benefit aside from just making things “more secure.” Controls we impose to make ourselves safer often come at the expense of convenience. But what if we took a different approach when thinking about them? A mentor of mine often starts a security design discussion by asking us to consider the following:

Why do cars have brakes?

Naturally, my first thought is that brakes allow the driver to slow or stop when going too fast. After all, a car with no brakes is dangerous, if not completely useless. However, when we consider that the braking system in the car enables the driver to go as fast as they want, the purpose of this control takes on a new meaning.

Changing perceptions about the controls we impose on security design within Information Security doesn’t come easy. Even some of the most seasoned infosec professionals will insist a particular control be in place without considering how the control impacts workflow, or worse, the bottom line.

“ As security professionals, we need to design controls that empower our business in the safest way possible, without getting in the way of where we’re trying to go. ”

Aligning controls and risks

Some of the most impactful security controls are the ones we don’t even realize are there. When designed correctly, they mitigate risk while providing a benefit to the user. The proliferation of biometric security is a great example of this. My mobile phone and computer let me access the device simply by touching or looking at it. Because I am much more focused on how convenient and easy it is to unlock my phone to look at cat pictures, I forget that these controls were designed as a security measure.

As a security professional, I do, however, need some assurance that the controls can’t be easily circumvented. For example, a quick search for exploits of fingerprint or face-recognition systems will show that they can be easily fooled with a 3D printer, some Play-Doh and a little time. However, when enhanced with an additional factor like a password or PIN, the authentication mechanism evolves to something much more difficult to compromise while being considerably easier for me to remember than a 16-character password that I have to change every ninety days.

In Information Security, this is why it’s important for us to consider how we design solutions for our environment. If all I’m protecting is access to cat pictures, is my face or fingerprint unlock enough? I’d say so. But for my Database Administrator (DBA) or Identity and Access Management (IAM) administrator to protect my company’s crown jewels? Definitely not.

Creating controls with a purpose

And this is what I think brings us to the crux of security design: as an end user, if I don’t know why the control is there, I won’t use it, or I might even try to go around it. Moreover, if I have no idea that it’s there, it had better work without getting in my way.

Let’s return to the car example. My daughter just finished the process of getting her driver’s license. In doing so, just like her old man, she was subject to videos depicting the horrors of car accidents and negligent driving. Way back in my day, the message was clear: driving death was thwarted by seatbelts and the ten-and-two. For her, it’s not texting and driving and the eight-and-four. I have absolutely no idea how a seatbelt can help me avoid an accident, but I’m crystal clear why I need one, should it happen. If I ask her about texting-and-driving, she’ll be equally clear that it’s possible to kill someone while doing it.

Getting back to the topic of security design, if I don’t understand why I need the control, it’s better that I have no awareness it’s around. Just like an airbag, I need to trust it’s there to protect me. On the flip side, I definitely need to know the importance of buckling up or putting my phone in the glovebox so I can keep my eyes on the road.

Transparent security

And this is what excites me about what we’re building at Code42 with our Code42 Next-Gen Data Loss Protection solution. Transparent security.

In the traditional Data Loss Prevention (DLP) space, transparent security is not an easy task. More often than not, people just trying to do their jobs end up getting blocked by a one-size-fits-all policy. Our application, on the other hand, enables security administrators and the business to come together in a way that gives the business what it wants: protection for its best ideas without security getting in the way.

Computers, just like cars, can be dangerous and yet, each of us can’t imagine a life without them. Their utility demands they be safe and productive. As security professionals, we need to design controls that empower our business in the safest way possible, without getting in the way of where we’re trying to go.

A Hot Topic for RSA: Debunking Traditional DLP

Hundreds of vendors and tens of thousands of cybersecurity professionals will invade San Francisco in a few weeks for the 2019 RSA Conference. The streets around Moscone Center will be filled with buses and cars emblazoned with cybersecurity vendor marketing messages, and the level of pedestrian traffic will skyrocket. When I consider what cybersecurity professionals are looking forward to at the event, it’s not only an opportunity to explore new technologies, but also new ways of thinking about data security. 

It’s all about the data.

Ultimately, people are looking for solutions to their security challenges. They are looking for the technologies that will help them manage their security posture and answer fundamental questions about data: Where is my data? Who has access to my data? How can I monitor when data is leaving my network? How do I know what data is leaving my organization? Bottom line—how can I protect my data?

“ We’re looking forward to RSA to talk about a new approach to data security. In fact, it’s a whole new take on Data Loss Prevention. ”

“I love my DLP.” Said no one ever.

At Code42, we’re looking forward to RSA to talk about a new approach to data security. In fact, it’s a whole new take on Data Loss Prevention (DLP). At its core, our approach debunks the fundamental requirements of policies, classifications and blocking — the things that we’ve learned to love to hate about DLP. And there are other major advantages to our new solution. It lives in the cloud, eliminates long deployments, and gives security teams visibility into every version of every file. We call it Code42 Next-Gen Data Loss Protection — a solution that is defined not by what you can prevent, but rather by how quickly you can detect, assess and respond to threats and reduce business risk.

Let’s face it. Gone are the days where you can build walls big enough to prevent data from getting outside your organization. Traditional DLP solutions aren’t working. The reality is that complicated and policy-laden security strategies run counter to the needs of today’s IP-rich, culturally progressive organizations, which thrive on mobility, collaboration and speed to get work done. 

Next-Gen Data Loss Protection is a challenge to the status quo.  

Yes, the endgame of Code42 Next-Gen Data Loss Protection is a direct challenge to the status quo. It offers businesses a quicker, easier way to protect their organization’s endpoint and cloud data from loss, leak, misuse and theft.

Are you ready to hear more about a different take on data loss protection and see it in action? When you’re at RSA, stop by and visit the Code42 team at Booth S 1359 (in the South Expo Hall). We’ll be conducting product demos, and we’ll have donuts on Wednesday and Thursday morning. Make sure you get there before we run out.

We Are All Surfing with the Phishes

Phishing is in the news again – and for good reason. Last month, the story first came to light of a “megabreach” drop of 773 million email and password credentials. At first, this disclosure made a sizable splash. But as researchers dug in further, it turned out the dump of online credentials had been circulating online for some time, as independent security journalist Brian Krebs covered on his blog, KrebsOnSecurity. Maybe the news wasn’t as big of a deal as we first thought?

The news turned out to be bigger, in some ways. More large tranches of credentials continued to be uncovered in the days that followed. These new collections bring the total to 2.2 billion records of personal data made public. Even if the vast majority of these records are old, and by all estimates they probably are, this massive collection of information substantially increases the risk of phishing attacks targeting these accounts now that they have been pushed above ground.

“ According to the State of the Phish Report, credential-based compromises increased 70 percent since 2017 and 280 percent since 2016. ”

Phishing remains one of the most common and, unfortunately, successful attacks that target users – and it’s not just user endpoints that are in the sights of the bad guys. Often, phishers aim first at users as a way to get closer to something else they are seeking, perhaps information on corporate executives, business partners or anything else they deem valuable. When an employee clicks on a link or opens a maliciously crafted attachment, his or her endpoint can be compromised. That not only puts the user’s data at risk of compromise or destruction, such as through a ransomware attack, but attackers can also use that endpoint as a platform to dig deeper into other networks, accounts and cloud services.

Consider Proofpoint’s most recent annual State of the Phish Report, which found that 83 percent of global information security respondents experienced phishing attacks in 2018. That’s up considerably from 76 percent in 2017. The good news is that about 60 percent saw an increase in employee detection after awareness training. According to the report, credential-based compromises increased 70 percent since 2017 and 280 percent since 2016.

Unfortunately, the report also found that data loss from phishing attacks tripled from 2017 to 2018. Tripled.

“ Someone is going to click something bad, and antimalware defenses will miss it. ”

This latest uncovering of credentials is a good reminder as to why organizations always have to have their defenses primed against phishing attacks. These defenses should be layered, such as to include security awareness training, antispam filtering, and endpoint and gateway antimalware, along with comprehensive data protection, backup and recovery capabilities for when needed, such as following a malware infection or successful ransomware attack. 

However, even with all of those controls in place, the reality is that some phishing attacks are going to be successful. Someone is going to click something bad, and antimalware defenses will miss it. The organization needs to be able to investigate successful phishing attacks. This includes investigating and understanding the location of IP addresses, gaining insights into the reputation of internet domains and IP addresses, and establishing workflows to properly manage the case. These investigations can help your organization protect itself by blocking malicious mail and traffic from those addresses, notifying domain owners of bad activity, and even assisting law enforcement.

When you find a file that is suspected of being malware, you can then search across the organization for that file. Chances are that, if it was a malicious file in the phishing attack, it may have targeted many people in the organization. Nathan Hunstad details how, in his post Tips From the Trenches: Enhancing Phishing Response Investigations, our hunt file capability integrates with security orchestration, automation and response (SOAR) tools to rapidly identify suspicious files across the organization and swiftly reduce risk. 

There’s another lesson to be learned here, one that is a good reminder for your organization and your staff: we are all on the dark web, where much of the information is about us. All of the information that has been hacked over the years, such as financial information, Social Security numbers, credit reports, background checks, medical information, employment files and, of course, emails and logon credentials, is likely to be found there.

That’s why, while much of the information in this trove of credentials that has surfaced from the depths of the web turned out to be old, it doesn’t mean there aren’t lessons here that bear reminding. For instance, it is critical to account for the increased risk created by all of the information that is out there and how it can be used in phishing attacks.

Tips From the Trenches: Using Identity and Access Management to Increase Efficiencies and Reduce Risk

As a security company, it’s imperative that we uphold high standards in every aspect of our security program. One of the most important and foundational of these areas is our Identity and Access Management (IAM) program. As part of Code42’s approach to this program, we have identified guiding principles that have a strong focus on automation. Below is an outline of our journey.

IAM guiding principles

Every IAM program should have guiding principles shared across HR, IT and security. Here are a few of ours:

1. HR would become the source of truth (SoT) for all identity lifecycle events, ranging from provisioning to de-provisioning.

The initial focus was to automate the provisioning and de-provisioning process, then address the more complex transfer scenario at a later phase. HR would trigger account provisioning when an employee or contractor was brought onboard, and shut off access as workers left the company. Further, the HR system would become authoritative for the majority of identity related attributes for our employees and contractors. This allowed us to automatically flow updates made to an individual’s HR record (e.g. changes in a job title or manager) to downstream connected systems that previously required a Help Desk ticket and manual updates.

2. Our objectives would not be met without data accuracy and integrity.

In-scope identity stores such as Active Directory (AD) and the physical access badge system had to be cleansed of legacy (stale) and duplicate user accounts before they were allowed to be onboarded into the new identity management process. Any user account that could not be matched or reconciled to a record in the SoT system was remediated. Although a rather laborious exercise, this was unquestionably worth it in order to maintain data accuracy.

3. Integrate with existing identity infrastructure wherever possible.

We used AD as our centralized enterprise directory, which continues to function as the bridge between on-prem systems and our cloud identity broker, Okta. Integrating with AD was of crucial importance, as this allows us to centrally manage access to both on-premises and cloud-based applications. When a worker leaves the company, all we need to do is ensure the user account is disabled in AD, which in turn disables the person’s access in Okta.

Once we had agreement on our guiding principles, it was time to start the design and implementation phase. We built our solution using Microsoft’s Identity Manager (MIM) because our IAM team had used Microsoft’s provisioning and synchronization engine in the past and found it to be easy to configure with many built-in connectors and extendable via .NET.  

IAM implementation phases

Identity in every organization is managed through a lifecycle. Below are two of the identity phases we have worked through and the solutions we built for our organization:

1. Automating provisioning and deprovisioning is key, but can also cause challenges.

One challenge we had was a lag between a new employee starting and employee records being populated in the systems that act as the source of truth. This didn’t allow lead time to provision a user account and grant access for the incoming worker. We solved this obstacle by creating an intermediate “SoT identity” database that mirrors the data we receive from our HR system. From there, we were able to write a simple script that ties to our service desk and creates the necessary database entry.

The next challenge was to automate the termination scenario. Similar to most companies, our HR systems maintain the user record long past an employee’s departure date for compliance and other reasons. Despite this, we needed a way to decommission the user immediately at time of departure. For this, we developed a simple Web Portal that allows our Helpdesk and HR partners to trigger termination. Once a user is flagged for termination in the Portal, the user’s access is automatically disabled by the identity management system. De-provisioning misses are a thing of the past!

2. Re-design and improve the access review process.

This phase aims to replace our current manual, spreadsheet-based, quarterly access certification process with a streamlined process using the built-in review engine in the identity management tool.

Implementing IAM at Code42 has been an awesome experience, and with the impending launch of the request portal, this year will be even more exciting! No matter how far along you are in your IAM implementation journey, I hope the concepts shared here help you along the way.