42 Seconds with a Code42 Customer: Lehigh University

Code42 provides your business with a variety of data security benefits, including increased productivity, risk mitigation, streamlined user workflows, and more — all in a single product that’s proven to save you money. While Code42 has a few primary use cases — backup and recovery, device migration, etc. — we’ve learned that our customers use Code42 in different ways. To explore how customers use our product, we recently partnered with the talented team at creative agency Crash+Sues to create a series of animated videos featuring the voices and likenesses of actual Code42 users.

In our latest video, Naazer Ashraf, senior computing consultant at Lehigh University, explains why the university relies on Code42 rather than sync-and-share products for data backup and restore. As one of the nation’s premier research universities, Lehigh is known for the excellence of its faculty’s research. Data is extremely important (and valuable) to researchers, so imagine the reaction when one researcher deleted files from Google Drive to save space — and discovered that doing so wiped the files for 10 other researchers. Naazer tells the story in just 42 seconds. Check it out below.

Forrester Offers Five Best Practices for Ransomware Protection

Ransomware has reared its ugly head again, this time bearing the name Bad Rabbit. According to analysts at CrowdStrike, Bad Rabbit shares 67 percent of its code with NotPetya, meaning this variant may actually be the work of the same threat actor. Bad Rabbit marks the third major ransomware outbreak of 2017. With WannaCry, NotPetya, and now Bad Rabbit, the public is more aware of ransomware than ever. However, awareness alone is not enough to protect your organization, your employees, and your files. With every outbreak, we are reminded that prevention is never foolproof, and faster detection only gets you so far. What matters most is the speed with which you can respond and bounce back when disruptions like ransomware strike. Forrester has assembled a guide to the proper response in the report “Ransomware Protection: Five Best Practices.” Key takeaways of the report include:

  • Avoiding a ransom payment is possible
  • Preventing ransomware doesn’t require new security investments
  • Focus on your core security needs

In addition, consider these important tips that will also help you amp up your speed of response to ransomware attacks:

The human element of ransomware doesn’t get enough attention.

Laptops and desktops are hit by ransomware most often for a simple reason: they’re operated by users. Your employees are moving fast to create the ideas that make the business run, which makes them prime targets for threat actors. Plus, cybercriminals are getting more and more sophisticated. They’ve optimized ransomware’s “user experience” to increase the odds that a victim falls prey and ultimately pays up.

Don’t blame humans for being human.

Humans will make mistakes. Give them not only the training to recognize the dangers, but also the tools to bounce back when they’ve made an error. It’s the role of IT and security teams to minimize the disruption and impact of those mistakes and get the idea engine — your employees — back up and running so the business keeps moving forward.

Protection requires a renewed focus on IT and security basics.

It’s these basics that Forrester analysts Chris Sherman and Salvatore Schiano discuss in detail in the Forrester report. Read “Ransomware Protection: Five Best Practices” today to learn how to minimize business disruption when ransomware strikes.

Protect Your Data from Insider Threats with Code42

Code42 provides your business with a variety of benefits, including increased productivity, risk mitigation, streamlined user workflows, and more — all in a single product that’s proven to save you money. Recently, Code42 launched Security Center, a new suite of tools to help you spot suspicious data use behaviors in your workforce — and respond to them if necessary. There’s a big reason we added this capability: 89 percent of corporate data loss involves the actions of an insider.

We recently partnered with the talented team at creative agency Crash+Sues to create a series of videos about the core features of Code42. This most recent video focuses on an all-too-common scenario in which an employee decides to steal valuable data from his employer. Unfortunately for him, this company has Code42’s Security Center.

Take a look today for an illustration of how Code42 and Security Center can help keep your enterprise’s data safe from insider threats.


Tips From the Trenches: Enhancing Phishing Response Investigations

In an earlier blog post, I explained how the Code42 security team is using security orchestration, automation and response (SOAR) tools to make our team more efficient. Today, I’d like to dive a little deeper and give you an example of how we’re combining a SOAR tool with the Code42 Forensic File Search API — part of the Code42 Next-Gen Data Loss Protection (DLP) product — to streamline phishing response investigations.

A typical phishing response playbook — with a boost

Below is a screenshot of a relatively simple phishing response playbook that we created using Phantom (a SOAR tool) and the Code42 Forensic File Search API.

We based this playbook on a phishing template built into the Phantom solution. It includes many of the actions that would normally be applied as a response to a suspicious email — actions that investigate and geolocate IP addresses, and conduct reputation searches for IPs and domains. We added a couple of helper actions (“deproofpoint url” and “domain reputation”) to normalize URLs and assist with case management.

You may have noticed one unusual action. We added “hunt file” via the Code42 Forensic File Search API. If a suspicious email has an attachment, this action will search our entire environment by file hash for other copies of that attachment.
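To make this concrete, here is a minimal Python sketch of what a “hunt file” action can look like. The endpoint path, query schema and field names below are illustrative assumptions rather than the documented Code42 Forensic File Search API, so treat this as a conceptual outline of the hash lookup, not a drop-in playbook action.

```python
import hashlib
import requests

# Hypothetical endpoint and token -- the real Forensic File Search API paths
# and query format may differ; consult the product documentation.
FFS_SEARCH_URL = "https://ffs.example.code42.com/api/v1/file-events"
API_TOKEN = "..."  # service-account token, ideally pulled from a secrets vault

def md5_of(path: str) -> str:
    """Compute the MD5 hash of the suspicious email attachment."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hunt_file(attachment_path: str) -> list:
    """Search the whole environment for other copies of a file by hash."""
    query = {
        "groups": [{
            "filters": [{
                "term": "md5Checksum",   # assumed field name
                "operator": "IS",
                "value": md5_of(attachment_path),
            }]
        }]
    }
    resp = requests.post(
        FFS_SEARCH_URL,
        json=query,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("fileEvents", [])

# The playbook branches on the result: copies on many endpoints suggest a
# widespread campaign, while a single long-lived copy suggests a false alarm.
hits = hunt_file("suspicious_invoice.docm")
print(f"Found {len(hits)} copies across the environment")
```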

“ Combining the speed of Code42 Next-Gen DLP with the automation of SOAR tools can cut response times significantly. ”

What Code42 Next-Gen DLP can tell us

Applying Code42 Next-Gen DLP to our playbook shortens investigation time. The “hunt file” action allows us to quickly see whether there are multiple copies of a malicious file in our environment. If there are, that is quick evidence of a potentially widespread email campaign against our users. On the other hand, the search may show that the file has a long internal history across file locations and endpoints, suggesting it is part of normal operating procedure and that we may be dealing with a false alarm. Either way, Code42 Next-Gen DLP and its investigation capability give us additional file context, so our security team can make smarter, more informed and more confident decisions about what to do next.

Applying Code42 Next-Gen DLP to other threat investigations

This type of “hunt file” action need not be limited to investigating suspected phishing emails. In fact, it could be applied to any security event that involves a file — an anti-virus alert, an EDR alert or even an IDS/IPS alert that triggers on file events. Using Code42 Next-Gen DLP, security staff can determine in seconds where else that file exists in the environment and whether any further action is necessary.

Combining the speed of Code42 Next-Gen DLP with the automation of SOAR tools can cut response times significantly. That’s something any security team can appreciate.

As always, happy threat hunting!


Baylor University Fast-Tracks Its Windows 10 Migration

To gain the benefits of Windows 10 quickly, Baylor University has been fast-tracking its migration across 8,000 PCs through a strategically scheduled process that effectively handles user settings and profiles. With that many devices on campus needing to be migrated, Baylor University’s IT team knew it had its work cut out for it. The team recently joined Code42 for a webinar detailing its Windows 10 migration journey.

“We realized that there was a need to make the process a little bit smoother, a little bit faster,” said Mike Gonzales, assistant director of system support at Baylor University. “That’s when we started working on getting things scripted to give us the ability to migrate in a faster, more automated, consistent fashion.”

“ The quicker you can get them in and out of the office so they can get back to their day, the easier an experience it is for them. The goal is to leave them in a better position than when they first started. ”

Baylor University’s migration process

One of the first steps in Baylor’s process was to ensure that the IT team could encrypt and back up devices at a pace matching the Windows 10 release cadence.

Once that was established, the team rolled out the migration with a testing phase. After they got comfortable with the process, they were able to migrate larger numbers of devices. They started with the devices with the least impact and complexity — in their case, desktop computers without third-party encryption software installed.

It was important to keep the migration process moving because extended support for Windows 7 ends in January 2020, so the team strategically scheduled a set number of migrations per month to meet that deadline.

Creating a consistent, scalable process has been critical for Baylor University. The goal of their process was to ensure that all users had the same positive migration experience and that the IT team could successfully and quickly migrate a large number of devices.

A quick and easy experience for users

By integrating Microsoft’s User State Migration Tool (USMT) with Code42, Baylor’s IT team developed a script that automatically recreated a user’s profile and settings so that after the migration, the device was as familiar to users as it had been previously.
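Baylor’s script itself isn’t shown in the webinar, but conceptually it wraps USMT’s scanstate and loadstate around the reimage. Here is a simplified Python sketch under that assumption; the USMT location, the file-share store and the rule files are placeholders, and a real deployment would add custom XML rules and error handling.

```python
import os
import subprocess

# Placeholder locations -- adjust for the actual USMT install and store share.
USMT_DIR = r"C:\USMT\amd64"
STORE = rf"\\migserver\usmt-store\{os.environ.get('COMPUTERNAME', 'unknown-host')}"

def capture_user_state() -> None:
    """Capture the user's profile and settings from the old Windows 7 install."""
    subprocess.run(
        [
            os.path.join(USMT_DIR, "scanstate.exe"), STORE,
            "/i:migapp.xml", "/i:miguser.xml",  # standard USMT migration rules
            "/o",  # overwrite any existing store for this machine
            "/c",  # continue past non-fatal errors
        ],
        check=True,
    )

def restore_user_state() -> None:
    """Reapply the captured profile after the Windows 10 wipe-and-reload."""
    subprocess.run(
        [
            os.path.join(USMT_DIR, "loadstate.exe"), STORE,
            "/i:migapp.xml", "/i:miguser.xml",
            "/c",
        ],
        check=True,
    )
```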

“When end users log in, they see their desktop background of their kids and that’s a really good user experience,” said Brad Hodges, senior analyst programmer at Baylor.

Using cloud-based technology such as Microsoft Intune or Windows Autopilot in combination with Code42 has helped with consistency and efficiency. The team can set up 32 machines to migrate concurrently in the installation area.

Because it is moving from Windows 7 to Windows 10, Baylor University is using a wipe-and-reload process so as not to carry over any legacy and compatibility issues. The process has been efficient, consistent and reliable.

“It’s a huge change for our people to go from 7 to 10,” said Gonzales. “The quicker you can get them in and out of the office so they can get back to their day, the easier an experience it is for them. The goal is to leave them in a better position than when they first started.”

Up next for Baylor University

Now that they have refined their process and made it scalable, the IT team members are planning to extend the migration process to their Mac devices. They are also working on a project to create a self-service model for out-of-the-box devices. Under the new model, users can unbox a device, log in and simply run a script to configure the new machine with the same settings and profile as their previous computer.

The Best of the Blog: October 2018

Catch up on the best stories from the Code42 blog that you might have missed in October. Here’s a roundup of highlights:

Code42 Next-Gen Data Loss Protection: What DLP Was Meant to Be
Legacy DLP systems often mean heavy policy management, lengthy deployments and blocks to employee productivity. It doesn’t have to be that way. Learn how simple, fast and effective next-gen DLP can be.

Tips From the Trenches: Architecting IAM for AWS with Okta
Amazon Web Services (AWS) can open up a world of opportunity for productivity improvement. Read about how Code42 implemented AWS while meeting stringent security standards.

MacDonald-Miller Boosts Mobile Workforce Productivity with Code42 (Video)
See how MacDonald-Miller saves time and safeguards valuable company IP by developing and implementing a multi-faceted data security strategy.

Tips From the Trenches: Searching Files in the Cloud
File movement investigations require a complete picture of the whole environment. Learn how Code42’s security team locates and monitors files across endpoints and cloud services like Google Drive and Microsoft OneDrive.


Security Must Enable People, Not Restrain Them

Do you ever think about why we secure things? Sure, we secure our software and data so that attackers can’t steal what’s valuable to us — but we also secure our environments so that we have the safety to do what we need to do in our lives without interference. For example, law enforcement tries to keep the streets safe so that civilians are free to travel and conduct their daily business relatively free of worry.

Now consider how everyday police work keeps streets safe. It starts with the assumption that most drivers aren’t criminals. Officers don’t stop and interrogate every pedestrian or driver about why they are out in public. That type of policing — with so much effort spent questioning law-abiding citizens — would not only miss a lot of actual criminal behavior, it would certainly damage the culture of such a society.

There’s a lot we can learn about how to approach data security from that analogy. Much of cybersecurity today focuses on trying to control the end user in the name of protecting the end user. There are painful restrictions placed on how employees can use technology, what files they are able to access and how they can access them. Fundamentally, we’ve built environments that are very restrictive for staff and other users, and sometimes outright stifling to their work and creativity.

This is why we need to think about security in terms of enablement, and not just restraint.

“ Security should be about enabling people to get their work done with a reasonable amount of protection — not forcing them to act in ways preordained by security technologies. ”

Prevention by itself doesn’t work

What does that mean in practice? Consider legacy data loss prevention (DLP) software as an example. With traditional DLP, organizations are forced to create policies that restrict how their staff and other users can use available technology, share information and collaborate. When users step slightly “out of line,” they are interrogated or blocked. This happens often and is mostly unnecessary.

This prevention bias is, unfortunately, a situation largely created by the nature of traditional DLP products. These tools ship with little more than a scripting language for administrators to craft policies — lots and lots of policies related to data access and how data is permitted to flow through the environment. And if organizations don’t have a crystal-clear understanding of how everyone in the organization uses applications and data (which they very rarely do), big problems arise. People are prevented from doing what they need to do to succeed at their jobs. Security should be about enabling people to get their work done with a reasonable amount of protection — not forcing them to act in ways preordained by security technologies.

This is especially unacceptable today, with so much data being stored, accessed and shared in cloud environments. Cloud services pose serious challenges for traditional DLP solutions because of their focus on prevention. Because so many legacy DLP products are not cloud-native, they lose visibility into what is happening on cloud systems. Too often, the result is that people are blocked from accessing the cloud services they need. Once again, users are treated like potential criminals — and culture and productivity both suffer.

This is also a poor approach to security in general. As security professionals who have been around a while know, end-user behavior should never be overridden by technology, because users will find ways to work around overbearing policies. It’s simply the law of governing dynamics, and it will rear its head whenever the needs of security technologies are placed above the needs of users.

Where’s the value for users?

There is one last area I’d like to cover where traditional DLP falls short on user enablement, and it’s an important one. Traditional DLP provides no tangible value back to the staff and others working in the environment it protects. All they typically get are warning boxes and delays in getting their work done.

In sum, traditional DLP — and security technology in general — doesn’t just prevent bad things from happening; it also too often prevents users from doing what they need to do. Users feel restrained like criminals for simply trying to do their jobs. In actuality, a very small percentage of users will ever turn malicious. So why should we make everyone else feel like they are doing something wrong? We shouldn’t.

Code42 Next-Gen DLP

At Code42, we believe it’s essential to assume the best intentions of staff and other users. That’s why Code42 Next-Gen Data Loss Protection focuses on identifying malicious activity rather than assuming malicious intent from everyone. It’s why the product is built cloud-native: organizations aren’t blind when it comes to protecting popular cloud services, and users aren’t blocked from working the way they want to work. And it doesn’t require endlessly created and managed policies that pigeonhole users into working in certain ways.

Finally, we believe in providing value to the end user. It’s why we provide backup and restore capabilities in Code42 Next-Gen DLP. This gives users the freedom to make mistakes and recover from them, along with the knowledge that their data is protected and safe.

Because it doesn’t block or interrogate users at every step, we believe Code42 Next-Gen DLP helps users be more secure and productive, and enhances organizational culture. It also gives the security team the opportunity to be an enabler for end users, not an obstacle.

In this sense, Code42 Next-Gen DLP is a lot like good police work. It gives its users the freedom they need to move about the world without every motion being questioned for potential malicious intent. This is a very powerful shift in the workplace paradigm; users should be empowered to behave and collaborate as they want without fear or worry regarding the security technology in place.

Gene Kim on DevOps, Part 3: DevSecOps and Why It’s More Important Than Ever (Video)

We at Code42 were fortunate to have our good friend Gene Kim, author of The Phoenix Project and the leading expert on DevOps, stop by our office for a conversation about DevOps and its implications for security. One of the best parts of the visit for me was talking with Gene about our DevSecOps experiences here at Code42 and how we have brought security into the DevOps model.

Here at Code42, we are on a mission to secure our customers’ ideas. That’s why our DevOps journey included bringing security into the DevOps model. I’m proud to say that we’ve been profoundly successful in bringing security risk controls into our engineering process.

Security is often viewed — especially by engineering — as the department of “No.” Yet the DevOps model tries to embody self-service and autonomy, which can be difficult to square with accountability.

As our DevSecOps model has come together, our security team has taken the time to establish expectations for the engineering side of the house, and we’ve been able to implement those controls. One of the most gratifying outcomes for me is that, instead of running an after-the-fact security scan, we’re now proactively dealing with security as we design and build our software.

Now, engineering has the freedom to do what they need to do, because they’re able to talk more openly and collegially with the security team. A lot of the answers that were “No” before, when explained in the right context, become “Yes,” because the security team can enable the engineers to move forward.

During our interview, Gene echoed the advantages of bringing security to the DevOps table. “It’s been really gratifying to see organizations … call it not DevOps but DevSecOps,” said Gene. “Truly integrating all the information security objectives into everyone’s daily work.”

Hear what else Gene had to say about DevOps and its implications for security.

If you haven’t already, be sure to check out the previous two installments in our three-part blog and video series with Gene where he talks about what it takes to become a DevOps organization and the role of culture.

Gene Kim on DevOps, Part 1: How Do You Become a DevOps Organization?

Gene Kim on DevOps, Part 2: The Cultural Impact of Becoming a DevOps Org

Gene Kim on DevOps, Part 2: The Cultural Impact of Becoming a DevOps Org (Video)

Gene Kim, author of The Phoenix Project and one of the most vocal thought leaders for DevOps, spent a day at Code42 headquarters in Minneapolis. During his visit, Gene talked about the optimal cultural conditions that must be in place for companies that embark on a DevOps journey and the advantages of bringing security to the table. This is the second installment in our three-part blog and video series, capturing our conversations with Gene.

As we’ve embarked on our own DevOps journey at Code42, we’ve experienced firsthand that the transformation must be embraced from a cultural perspective in order to happen. The core principles of DevOps require systems thinking, coming together, gaining feedback and, at the same time, constant experimentation. For DevOps to work, it’s critical to have cultural norms that allow people to provide honest feedback without repercussions.

DevOps is not just for the engineering team. There’s something in DevOps that affects everybody, from the systems architects to the operations teams to the very way QA is administered. In fact, the current focus on privacy and security makes the cultural perspective of DevOps more important than ever, because it brings the security and engineering teams together in a very real way. That’s one of the things we at Code42 really appreciate about DevOps: the cultural norms propagate around the organization, so you find groups collaborating across the company.

During my conversation with Gene, he reinforced the importance of teamwork. He said, “Without a doubt, there has to be a sense of collegiality between information security and the engineering teams — that we are fellow team members working toward a common objective. It’s so counter-intuitive how much more effective this is than the traditional high-ceremony and adversarial nature between infosec and everyone else!”

Listen to part two of my interview with Gene to hear what else he had to say about cultural norms, the absence of fear and empowering security.

“ Without a doubt, there has to be a sense of collegiality between information security and the engineering teams — that we are fellow team members working toward a common objective. ”

Check out the first part of our blog and video series with Gene for insights on how to become a DevOps org, and watch for part three — why DevSecOps is more important than ever — coming soon.

Gene Kim on DevOps, Part 1: How Do You Become a DevOps Organization? (Video)

Gene Kim, author of The Phoenix Project, stopped by our offices. Gene, who is regarded in the industry as one of — if not the — most vocal enthusiasts of DevOps, is a friend of Code42 and a personal mentor of mine. I was thrilled to sit down and interview him. As a result of our visit, we created a three-part blog and video series exploring his views on DevOps — particularly security’s growing role. Our first installment opens with his thoughts on what goes into becoming a DevOps company.

The books Gene has written and his perspective on DevOps have changed the way we at Code42 think about our process. After going through our own DevOps journey, we’ve been optimizing our teams to improve our speed of delivery, ensuring we get our customers the features they need faster.

We are not the only ones to find ourselves on this transformational path. Many of our customers are on DevOps journeys of their own — or thinking about starting one — so we wanted to share our experiences and Gene’s best practices on becoming a DevOps organization.

When I was talking to Gene, I asked him what it means to be a DevOps company, particularly in this day and age when security is such a top concern for businesses and consumers. We hope this video will help companies understand some of the implications and real advantages of adopting a DevOps model.

“ One of the biggest surprises coming off The Phoenix Project is just to see how much DevOps can dramatically improve the lives of not only developers, but also QA, operations and security. ”

During our conversation, Gene said, “One of the biggest surprises coming off The Phoenix Project is just to see how much DevOps can dramatically improve the lives of not only developers, but also QA, operations and security.”

Be sure to check out the next two installments in our three-part blog and video series with Gene, where he talks about the role of culture in becoming a DevOps org and why DevSecOps is more important than ever.

Security Panel: Filling Gaps in Your Security Stack (Video)

Keeping up with the constantly evolving cyber threatscape and plugging new security gaps is never-ending. We recently gathered several members of the Code42 security team for a panel discussion about how they manage threats and mitigate risks in today’s security environment. A recording of their conversation is now available on demand.

Below are a few highlights and sneak peeks from their conversation. You can check out the full panel discussion here.

Vendor risk management in the SaaS era

Maintaining visibility across a SaaS environment can be very challenging. One action our security team has taken to help address the situation is to put more focus on vendor risk management. Watch the short clip below to learn how they have defined a set of security controls for our vendor partners — and how the team holds vendors accountable to these standards.

Managing insider threats

Not all insider threats are malicious. People sometimes make mistakes and unexpected disruptions can put data at risk. In the next video clip, our security team talks about strategies to consider when implementing a comprehensive insider threat security plan that protects against both malicious and unintended threats.

Mitigating the risk of shadow IT with an application inventory

Gartner predicts that by 2020, one-third of successful enterprise attacks will target unauthorized shadow IT applications. At Code42, we know that shadow IT is a big security risk. We also know it’s a risk we’re not going to fully eliminate. Instead, our security team focuses on increasing visibility into what applications — authorized or not — exist in our environment. In this next video clip, our panel shares how they search for instances of potentially compromised applications so they can take quick and effective action.

How Code42 Forensic File Search fits in the security tech stack

With the wide range of security tools that already exist, where does Code42 Forensic File Search fit in a security stack? In the next video clip, our security team talks about how they use Code42 Forensic File Search in combination with other security tools and on its own to address unique use cases. The panel also talks about innovative ways that organizations can use Code42 Forensic File Search to fill existing security gaps, instantly expanding visibility and getting answers to questions like “Where is this file in our environment?”

This is just a sample of the insider knowledge shared during our panel discussion. Don’t miss your chance to hear the full conversation, on demand, right here: Code42 Security Panel: Choosing Tools to Fill Gaps in Your Security Stack

Code42 Next-Gen Data Loss Protection: What DLP Was Meant to Be

Malware and other external cyber threats get most of the headlines today. That’s not surprising, given the damage done to companies, industries and even countries by outside-in attacks on data. Yet insider threats — data lost or stolen through actions inside the company — are just as big a risk.

According to the 2018 Insider Threat Report by Cybersecurity Insiders, 90 percent of cybersecurity professionals feel vulnerable to insider threats. McKinsey’s “Insider threat: The human element of cyberrisk” reports that 50 percent of breaches between 2012 and 2017 involved insiders.

“ By rethinking traditional DLP, you can know exactly where all your data is, how it is moving throughout your organization and when and how it leaves your organization — without complex policy management, lengthy deployments or blocks to your users’ productivity. ”

“The rise of insider threats is a significant threat to every business and one that is often overlooked,” said Jadee Hanson, Code42’s CISO. “While we all would like to think that employees’ intentions are good, we prepare for malicious actions taken by those from within our organizations. As external protection increases, we all should be concerned as to the influence external actors may have on those working for us and with us every day.”

Insider threats are a big deal, and traditional data loss prevention (DLP) solutions were developed to protect companies and their data from these internal events.

DLP hasn’t delivered

While traditional DLP solutions sound good in concept, most companies use only a fraction of their capabilities. Security teams describe using these solutions as “painful.” Legacy DLP deployments take months or years, because proper setup requires an extensive data classification process, and refining DLP policies to fit unique users is complex and iterative. And after all that time, traditional DLP still blocks employees from getting their work done, imposing rigid data restrictions that interfere with user productivity and collaboration. These solutions also require on-site servers — counter to the growing business priority of moving solutions to the cloud.

Most importantly, legacy DLP solutions are focused on prevention. Business and security leaders now recognize that prevention alone is no longer enough. Mistakes happen, and data threats sometimes succeed. Being able to recover quickly from data loss incidents is just as important as trying to prevent them.

Rethink DLP

At Code42, we protect over 50,000 companies from internal threats to their data. This focus on protection has enabled us to see things differently and to develop an alternative to data loss prevention: data loss protection. We are excited to announce Code42 Next-Gen Data Loss Protection (Code42 Next-Gen DLP), a solution that rethinks legacy DLP and protects data from loss without slowing down the business.

Code42 Next-Gen DLP is cloud-native and protects your cloud data as well as all of your endpoint data. It deploys in days instead of months, and provides a single, centralized view with five key capabilities:

  • Collection: Automatically collects and stores every version of every file across all endpoints, and indexes all file activity across endpoints and cloud. 
  • Monitoring: Helps identify file exfiltration, providing visibility into files being moved by users to external hard drives, or shared via cloud services, including Microsoft OneDrive and Google Drive.
  • Investigation: Helps quickly triage and prioritize data threats by searching file activity across all endpoints and cloud services in seconds, even when endpoints are offline; and rapidly retrieves actual files — one file, multiple files or all files on a device — to determine the sensitivity of data at risk.
  • Preservation: Allows configuration to retain files for any number of employees, for as long as the files are needed to satisfy data retention requirements related to compliance or litigation.
  • Recovery: Enables rapid retrieval of one file, multiple files or all files on a device even when the device is offline, or in the event files are deleted, corrupted or ransomed.

By rethinking traditional DLP, you can know exactly where all your data is, how it is moving throughout your organization and when and how it leaves your organization — without complex policy management, lengthy deployments or blocks to your users’ productivity. DLP can finally deliver on what it was originally created to do.


Tips From the Trenches: Architecting IAM for AWS with Okta

In the last year, Code42 made the decision to more fully embrace a cloud-based strategy for our operations. We found that working with a cloud services provider opens up a world of possibilities that we could leverage to grow and enhance our product offerings. This is the story of how we implemented identity and access management (IAM) in our Amazon Web Services (AWS) environment.

IAM guiding principles

Once the decision was made to move forward with AWS, our developers were hungry to start testing the newly available services. Before they could start, we needed two things: an AWS account and a mechanism for them to log in. Standing up an account was easy. Implementing an IAM solution that met all of our security needs was more challenging. We were given the directive to architect and implement a solution that met the requirements of our developers, operations team and security team.

We started by agreeing on three guiding principles as we thought through our options:

1.) Production cloud access/administration credentials need to be separate from day-to-day user credentials. This was a requirement from our security team that aligns with existing production access patterns. Leveraging a separate user account (including two-factor authentication) for production access decreases the likelihood of the account being phished or hijacked. This secondary user account wouldn’t be used to access websites, email or for any other day-to-day activity. This wouldn’t make credential harvesting impossible, but it would reduce the likelihood of an attacker easily gaining production access by targeting a user. The attacker would need to adopt advanced compromise and recon methods, which would provide our security analysts extra time to detect the attack.

2.) There will be no local AWS users besides the enforced root user, who will have two-factor authentication; all other users will come through Okta. Local AWS users have significant limitations and become unwieldy as a company grows beyond a few small accounts. We were expecting dozens, if not hundreds, of unique accounts, which could leave our developers with a unique user in each AWS environment. These user accounts would each have their own password and two-factor authentication. In addition to a poor end-user experience, identity lifecycle management would become a daunting, manual task. Imagine logging into more than 100 AWS environments to check whether a departing team member has an account. Even if we automated the process, it would still be a major headache.

Our other option was to provide developers with one local AWS user with permissions to assume role in the different accounts. This would be difficult to manage in its own way as we tried to map which users could connect to which accounts and with which permissions. Instead of being a lifecycle challenge for the IAM team, it would become a permissioning and access challenge.

Fortunately for us, Code42 has fully embraced Okta as our cloud identity broker. Employees are comfortable using the Okta portal and all users are required to enroll in two-factor authentication. We leverage Active Directory (AD) as a source of truth for Okta, which helps simplify user and permission management. By connecting Okta to each of our AWS accounts, users can leverage the same AD credentials across all AWS accounts — and we don’t need to make any changes to the existing IAM lifecycle process. Permissioning is still a challenge, but it can be managed centrally with our existing IAM systems. I will describe in greater detail exactly how we achieved this later in the post.

3.) Developers will have the flexibility to create their own service roles, but will be required to apply a “deny” policy, which limits access to key resources (CloudTrail, IAM, security, etc.). As we created these principles, it became clear that the IAM team would not have the bandwidth to be the gatekeeper of all roles and policies (how access is granted in AWS). Developers would need to be empowered to create their own service roles, while we maintained control over the user access roles via Okta. Letting go of this oversight was very difficult. If not properly managed, it could have opened us up to the risk of malicious, over-permissioned or accidental modification of key security services.

Our initial solution to this problem was to create a “deny” policy that would prevent services and users from interacting with some key security services. For example, there should never be a need within an application or microservice to create a new IAM user or a new SAML provider. We notified all users that this deny policy must be attached to all roles created and we used an external system to report any roles that didn’t have this deny policy attached.

Recently, AWS released a new IAM feature called permission boundaries. The intent of permission boundaries is similar to that of our deny policy. By using permission boundaries, we can control the maximum permissions users can grant to the IAM roles they create. We plan to roll this out in lieu of the deny policy in the very near future.

Example of a role found without the deny policy attached
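To make the guardrail idea concrete, here is a rough boto3 sketch of both approaches. The denied services, policy names and role definitions are assumptions for illustration; Code42’s actual policy documents aren’t published.

```python
import json
import boto3

# Assumed guardrail scope: block tampering with IAM and CloudTrail. The real
# policy would likely cover more of the security tooling.
DENY_DOC = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyKeySecurityServices",
        "Effect": "Deny",
        "Action": ["iam:*", "cloudtrail:*"],
        "Resource": "*",
    }],
}

# A permission boundary defines *maximum* permissions, so it needs a broad
# Allow alongside the Deny; the Deny still wins wherever the two overlap.
BOUNDARY_DOC = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowEverythingElse", "Effect": "Allow",
         "Action": "*", "Resource": "*"},
        DENY_DOC["Statement"][0],
    ],
}

iam = boto3.client("iam")

# Original approach: create the managed deny policy once per account, then
# require developers to attach it to every role they create.
deny_arn = iam.create_policy(
    PolicyName="security-guardrail-deny",
    PolicyDocument=json.dumps(DENY_DOC),
)["Policy"]["Arn"]
iam.attach_role_policy(RoleName="example-service-role", PolicyArn=deny_arn)

# Newer approach: cap a role's maximum permissions at creation time with a
# permission boundary instead of relying on an attached deny policy.
boundary_arn = iam.create_policy(
    PolicyName="security-guardrail-boundary",
    PolicyDocument=json.dumps(BOUNDARY_DOC),
)["Policy"]["Arn"]
iam.create_role(
    RoleName="example-bounded-role",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
    PermissionsBoundary=boundary_arn,
)
```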

Implementing Okta and AWS

When thinking through connecting Okta and AWS, we were presented with two very different architectural designs: hub and spoke, and direct connect. The hub and spoke design leverages an AWS landing account that is connected to Okta. Once logged in to this account, users can switch roles into the other AWS accounts they are authorized to use. The direct connect design creates a new Okta application icon for each AWS account. Users access their accounts by visiting their Okta homepage and selecting the account they want to use.

Power users tend to prefer the hub and spoke model, as it allows them to quickly jump from account to account without logging in again or grabbing a new API token. More casual users prefer to have all accounts presented on one page. They aren’t swapping among accounts, and it isn’t fair to ask them to memorize account numbers (or even exact short names) so they can execute an assume-role command. In addition to user experience, we considered how easy each approach would be to automate once a new account has been created. The two approaches each have merit, so we decided to implement both.

When a new account is created, it is bootstrapped to leverage the hub and spoke landing account. Automation can immediately start working with the account, and certain power users get the access they need without any IAM intervention. The IAM team can revisit the account when convenient and stand up the direct connection to Okta. New Okta features, currently in beta, will improve this direct connect process.

One final thing I would like to touch on is how we leverage the Okta launcher to get API tokens for AWS. One benefit of having local users in AWS is that each user is given their own API key. While this is a benefit to end users, these keys are very rarely rotated and can present a significant security risk (such as an accidental public GitHub upload). To address this, Okta has created a Java tool that generates a temporary AWS API key. The repo can be found here. Like many other companies, we have created wrappers for this script to make things as easy as possible for our end users. After cloning the repo, a user can type the command “okta -e $ENV_NAME” and the script will reach out to Okta and generate an API key for that specific AWS account. Users do need to know the exact environment name for the script to work, but most power users who need API access will have this information.
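For context, the short-lived keys such a tool hands back come from AWS STS. Here is a conceptual Python sketch of that exchange; the ARNs are placeholders, and obtaining the SAML assertion (which the Java tool handles via the Okta sign-in flow) is out of scope here.

```python
import boto3

def temporary_aws_session(saml_assertion: str) -> boto3.Session:
    """Exchange an Okta SAML assertion for short-lived AWS credentials."""
    sts = boto3.client("sts")
    creds = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::123456789012:role/developer",           # placeholder
        PrincipalArn="arn:aws:iam::123456789012:saml-provider/Okta",  # placeholder
        SAMLAssertion=saml_assertion,  # base64-encoded assertion from Okta
        DurationSeconds=3600,  # keys expire in an hour instead of living forever
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```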

No matter where your company is on the path to leveraging a cloud service provider, IAM is a foundational component that needs to be in place for a successful and secure journey. If possible, try to leverage your existing technologies to help improve user experience and adoption. I hope the principles we shared here help you think through your own requirements.
