Addressing the Security Talent Shortage From Within - Code42 Blog

Tips From the Trenches: How I Moved From Mattress Sales to Malware

Yeah. You read that right. I’m an information security analyst now, but it wasn’t long ago that I was living in the heart of Silicon Valley…selling mattresses!

So there I was, in my early 20s. I’d missed the first .com gold rush, I had no degree and I basically used my laptop to play World of Warcraft. But selling mattresses DID give me some advantages. Besides being extremely lucrative at the time, the job had its perks: no one bought mattresses online yet, “product testing” consisted of taking naps on expensive beds and making sure the massage chairs worked properly, and I got paid to talk to people about sleeping — a favorite pastime of mine to this day. I had a lot of downtime…so, I started studying.

After a short stint in banking, I landed a sales gig at a tech startup. I was 33 and just getting into the technology space. Sales is a hard habit to kick!

Next, I was living in Minnesota and looking for yet another sales gig. This time in Silicon Prairie. At this point, I’d heard of Code42 and knew that’s where I wanted to be. I told my soon-to-be director that I didn’t care what the role was, I wanted in. I knew I could figure things out from there. A week later, I was on an amazing business development team.

“ I’m not saying information security is for everybody, I’m saying information security is for anybody with the drive and passion to self-educate, move outside their comfort zone and be brave enough to introduce themselves to perfect strangers! ”

By now you’re asking, “What does any of this have to do with information security?” At least I would be. Hang in there, we’re close. The context here matters. Understand that at this point, I’d been in sales for more than twenty years!

Then, two things happened. First, I attended what we call “Experience Week.” Essentially, it’s a week of getting to know the leadership team, the culture and our co-workers at Code42. Our CEO Joe Payne got up to speak. I’m sure it was informative and truly inspirational but I mostly remember one thing he said, “Here at Code42 we have a value: Get it done. Do it right. And if you’re getting it done and doing it right and you want to do something else, tell us. We’ll help in any way we can.” Sometimes you hear these things from leadership, and it doesn’t actually mean anything. But I decided to put this to the test.

At the same time, I just happened to be reading “Managing Oneself” by Peter F. Drucker (a must-read for any professional BTW). There was one statement that hit me like a ton of bricks: “After 20 years of doing very much the same kind of work, people are very good at their jobs…and yet they are still likely to face another 20 if not 25 years of doing the same kind of work. That is why managing oneself increasingly leads them to begin a second career.” This was becoming a theme for me, so I figured this was my chance to leap out of my comfort zone and reach for something exciting!

I knew, with every bone in my body, I did NOT want to spend the next 20+ years of my professional life generating my income by convincing others to part with theirs. So, now what?

Well, after consulting with my personal board of directors and a whole lot of prayer, I took a look at the digital landscape and knew I wanted to transition into security. The decision was based on learning some key elements of the security space:

  • There are currently 3 million unfilled cybersecurity positions globally. ((ISC)2 Workforce Study)
  • 52% of CISO respondents named “communication & people skills” as a top quality in potential candidates. (Dark Reading)
  • No IT degree required!

Opportunity? Check. Can I talk to people? Double check. No IT degree required? Check. (And, whew!)

Evan Francen of FRSecure is fond of saying, “Get into security! There’s plenty of work to go around.” OK…thanks Evan! Uhhh, how?

“ Luckily, there is an exhaustive amount of resources available in the wild for anyone curious enough to look. ”

Luckily, there is an exhaustive amount of resources available in the wild for anyone curious enough to look. Believe me, I checked out every free resource known to man. But while I was building knowledge, I wondered if it would be enough to get my foot in the door. My inner sales guru said, “No, grasshopper, you need to meet people who can help.” I’d tell anyone at this point: what really makes a difference for someone without the degrees or the experience is the ability to demonstrate passion and enthusiasm for security, and a real desire to establish and foster genuine relationships with folks who are already in the security world. My new contacts in security had that passion — and I needed to show I did, too!

First, I sought out our internal security team and requested time to chat with anyone who would humor me, peppered them with questions and afterward made sure to send each of them a handwritten ‘thank you’ note.

Second, and probably the most important, I ACTED on their suggestions. The worst thing you can do is ask people for their advice and then completely ignore their recommendations.

By this point I had the bug and I wasn’t going to take no for an answer. I even took my sales skills on a road show. Here’s what I did:

  • I took PTO to attend security conferences and trade shows.
  • I found security happy hours and meetups where I could network with other security professionals.
  • I found no shame in doggedly hounding my CISO to give me a shot.
  • I found opportunities to interact with her and the security team. Even going so far as to show up, front row, to a panel discussion she was speaking on ABOUT the talent shortage in the security field. A bit creepy? Sure. Effective? Well, two months later I was offered a role as an information security analyst.

I’m not saying information security is for everybody, I’m saying information security is for anybody with the drive and passion to self-educate, move outside their comfort zone and be brave enough to introduce themselves to perfect strangers! You don’t have to be super technologically savvy (although that certainly helps), have a master’s in computer science, or be some hacker in a basement wearing a black hoodie bent over a keyboard trying to take down “the man.”

Start with taking a look at the industry — do your research, make sure to network with people (security folks are often excited to share their knowledge), be a part of something bigger than yourself and want to be one of the good guys! Teaching people security is easy — it’s having the chops and the drive that’s up to you.

Now, the work begins! Go get ‘em, grasshopper!

Connect with Josh Atkinson on LinkedIn.


Tips From the Trenches: Security Needs to Learn to Code

In the old days, security teams and engineering teams were highly siloed: security teams were concerned with things like firewalls, anti-virus and ISO controls, while engineering teams were concerned with writing and debugging code in order to pass it along to another team, like an operations team, to deploy. When they communicated, it was often in the stilted form of audit findings, vulnerabilities and mandatory OWASP Top Ten training classes that left both sides feeling like they were mutually missing the point.

While that may have worked in the past, the speed at which development happens today means that changes are needed on both sides of the equation to improve efficiency and reduce risk. In this blog post, I’ll be talking about why security teams need to learn to code (the flip side of the equation, why engineering teams need to learn security, may be a future blog post).

“ Simply being comfortable with one or two languages can allow you to do code reviews and provide another pair of eyes to your engineers as well. ”

While it’s not uncommon for people to come into security having done code development work in the past, it is not necessarily the most typical career path. Oftentimes, people come into the security realm without any coding experience other than perhaps a Java or Python course they took at school or online. Because security encompasses so many different activities, there would appear to be no downside if security folks outside of a few highly specialized roles, like penetration testing, didn’t have coding experience. However, I’m here to tell you that coding can be beneficial to any security professional, no matter the role.

Let’s start with automation. No matter what you are doing in security, odds are that you have some kind of repeatable process, such as collecting data, doing analysis, or performing some action, that you can automate. Fortunately, more and more applications have APIs available to take advantage of, and are therefore candidates for writing code to do the work so you don’t have to.

At this point, you may think that this sounds a lot like a job for a Security Orchestration Automation and Response (SOAR) tool. A SOAR tool can absolutely be used to automate activities, but already having a SOAR tool is certainly not a requirement. A simple script that ties together a couple of applications via an API to ingest, transform and save data elsewhere may be all you need in order to start getting value out of coding. Plus, this can be a great way to determine how much value you may be able to get out of a full-blown SOAR tool.
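As a concrete (if simplified) illustration of that kind of glue script, here is a Python sketch that filters and reshapes alert records from one hypothetical system before forwarding them to another. The endpoint URL and field names are placeholders for illustration, not a real product API:

```python
# Minimal sketch of an API "glue" script: pull records from one system,
# reshape them and hand them to another. The endpoint and field names
# below are hypothetical placeholders.
import json
from urllib.request import Request, urlopen


def transform(alerts, min_severity=7):
    """Keep only high-severity alerts and reshape them for the destination system."""
    return [
        {"host": a["hostname"], "rule": a["rule_name"], "sev": a["severity"]}
        for a in alerts
        if a["severity"] >= min_severity
    ]


def push_events(events, url="https://siem.example.com/api/events"):
    """POST the transformed events to the (placeholder) ingest API."""
    req = Request(
        url,
        data=json.dumps(events).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urlopen(req)


if __name__ == "__main__":
    sample = [
        {"hostname": "web-01", "rule_name": "brute-force-login", "severity": 9},
        {"hostname": "db-02", "rule_name": "port-scan", "severity": 3},
    ]
    # Only the severity-9 alert survives the filter.
    print(transform(sample))
```

Even a script this small removes a manual copy-and-paste step, and running it on a schedule is the first rung on the ladder toward full orchestration.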

Learning to code won’t just help your own efficiency. Writing your own code can help make all of those OWASP Top Ten vulnerabilities much more concrete, which can lead to better security requirements when collaborating with engineers. Simply being comfortable with one or two languages can allow you to do code reviews and provide another pair of eyes to your engineers as well. It’s also incredibly valuable to be able to give engineers concrete solutions when they ask about how to remediate a particular vulnerability in code.

Here at Code42, our security team believes strongly in the value of learning to code. That’s why we’ve set a goal for our entire security team, no matter the role, to learn how to code and to automate at least one repetitive activity with code in 2019. By doing this, we will make our overall security team stronger, work more efficiently and provide more valuable information to our engineering teams.

Happy coding!

Connect with Nathan Hunstad on LinkedIn.


Tips From the Trenches: Providing Value Through Business Continuity

No matter what we do in our jobs, we all want to provide value back to the organizations where we work. With some jobs, tangible evidence of value is very apparent, such as hitting your sales quota or building code for a feature in your software. In business continuity, that can be a bit of a challenge. To start, most people don’t understand what it is, or what responsibilities are tied to it. If someone asks me what I do and my response is “business continuity,” the conversation usually goes in a different direction shortly thereafter. That makes showing value to your company a challenge from the get-go.

“ If ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful business continuity program. ”

Here are a few key principles I have learned in my business continuity journey that have helped me show value within my organization:

Leadership buy-in

Real simple: your business continuity program has to have this in order to succeed. If you think you’re fully prepared to respond to and recover from a disaster without buy-in from leadership, you’re kidding yourself. Leadership needs to understand what you’re doing, why you’re doing it and how it will benefit their department and the company as a whole. This will give you top-level support and make your job easier. Guidance from above will ensure your requests for resources for business impact analyses and recovery testing are granted.

No doubt getting leadership’s attention can be a challenge, but it has to happen. I have been part of organizations that didn’t have it, and the result was a program that could never meet its full potential because our requests for time and effort from other departments were never a priority.

At Code42, we worked with each member of our executive leadership team to outline what we were doing, why we were doing it and what assistance we would need from their departments. Department leaders were then able to give direction on who they wanted us to work with, setting the whole program in motion.

Narrow the scope of your program

On the surface this seems counterintuitive. Why not cover every function and supporting activity? The reasoning is that most companies don’t have a dedicated team of employees focused on business continuity; for some, business continuity is simply one of many responsibilities they hold. Beyond the manpower issue, the further you head into supporting functions and away from what’s really critical, the lower the rate of return for the company. The key is to focus on what’s critical. I have experienced this firsthand: my drive to make sure all business functions were documented and prepared for had me spending countless hours covering the full spectrum of the business. By the time I was finished, the data was already out of date, and the effort amounted to a poor use of resources with little to no value for the company.

When we worked with each member of the executive leadership team at Code42, we kept our scope to the top two critical functions that each department performs. This helped our program avoid the minutiae and focus squarely on what’s critical for supporting our product, our customers and our employees.

Make the information accessible

The information for your business continuity program should not be sequestered away from your employees; it should be easy to view and update. This is a rather obvious statement, but one that I have seen many companies struggle with. Here at Code42, we made a misstep by thinking the solution to our business continuity challenges lay with a continuity software provider. The intent was for the software to help us manage all of our data, produce plans and be a one-stop shop for all things business continuity. Not long after onboarding, challenges started to emerge. The biggest was that the information was not accessible to the workforce; another was that the tool didn’t tie in to any software already in use at Code42. It was on an island, and of little to no value to the business. A pivot was needed, and thankfully we didn’t have to go far for an answer.

The answer came from taking a step back and determining what tools employees use across the company on a day-to-day basis. For us, the answer lay within Confluence, which serves as our internal wiki. This is where we build out department-focused pages covering their respective critical functions and dependencies. Connecting to Confluence allowed us to tie in another company-wide application, JIRA, for tickets related to vendor assessments, risks and incidents. Our focus throughout the process was to ensure value was being passed on to Code42 and its employees, and the key piece of that was having information easily accessible.

Business continuity has a number of inherent challenges, but if ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful program. I hope the principles I laid out help you provide better value to your own company.

Connect with Loren Sadlack on LinkedIn.


Tips From the Trenches: Cloud Custodian–Automating AWS Security, Cost and Compliance

“We’re moving to the cloud.” If you haven’t heard this already, it’s likely you will soon. Moving to the public cloud poses many challenges upfront for businesses today. Primary problems that come to the forefront are security, cost and compliance. Where do businesses even start? How many tools do they need to purchase to fulfill these needs?

After deciding to jump-start our own cloud journey, we spun up our first account in AWS, and it was immediately apparent that traditional security controls weren’t necessarily going to adapt. Trying to lift and shift firewalls, threat and vulnerability management solutions and the like ran into a multitude of issues, including but not limited to networking, AWS IAM roles and permissions, and tool integrations. It was clear that tools built for on-premise deployments were no longer cost or technologically effective in AWS and a new solution was needed.

“ It was clear that tools built for on-premise deployments were no longer cost or technologically effective in AWS and a new solution was needed. ”

To address these challenges, we decided to move to a multi-account strategy and automate our resource controls to support increasing consumption and account growth. Our answer was Capital One’s open source Cloud Custodian tool, because it helps us manage our AWS environments by ensuring the following business needs are met:

  • Compliance with security policies
  • Enforcement of AWS tagging requirements
  • Identification of unused resources for removal or review
  • Enforcement of off-hours to maximize cost reduction
  • Enforcement of encryption requirements
  • Assurance that AWS Security Groups are not overly permissive
  • And many more…

After identifying a tool that could automate our required controls in multiple accounts, it was time to implement it. The rest of this blog post will focus on how Cloud Custodian works, how Code42 uses the tool, what kinds of policies (with examples) Code42 implemented and resources to help you get started with Cloud Custodian in your own environment.

How Code42 uses Cloud Custodian

Cloud Custodian is an open source tool created by Capital One. You can use it to automatically manage and monitor public cloud resources as defined by user written policies. Cloud Custodian works in AWS, Google Cloud Platform and Azure. We, of course, use it in AWS.

As a flexible “rules engine,” Cloud Custodian allowed us to define rules and remediation efforts in one policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. These policies are written in a simple YAML configuration file that specifies a resource type, resource filters and actions to be taken on specified targets. Once a policy is written, Cloud Custodian can interpret the policy file and deploy it as a Lambda function into an AWS account. Each policy gets its own Lambda function that enforces the user-defined rules on a user-defined cadence. At the time of this writing, Cloud Custodian supports 109 resources, 524 unique actions and 376 unique filters.

As opposed to writing and combining multiple custom scripts that make AWS API calls, retrieve responses and then execute further actions based on the results, Cloud Custodian simply interprets an easy-to-write policy, taking the resources, filters and actions into consideration and translating them into the appropriate AWS API calls. This simplification makes this type of work easy and achievable even for non-developers.
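To make the policy anatomy concrete, here is a minimal sketch of a YAML policy file; the policy name, tag key, schedule and action are illustrative examples, not one of Code42’s actual policies:

```yaml
policies:
  - name: ec2-missing-owner-tag      # each policy becomes its own Lambda
    resource: ec2                    # resource type to target
    mode:
      type: periodic                 # deploy as a scheduled Lambda function
      schedule: "rate(1 hour)"       # user-defined cadence
    filters:
      - "tag:owner": absent          # resource filter: no "owner" tag
    actions:
      - stop                         # action taken on matching instances
```

Everything Cloud Custodian does follows this same resource/filters/actions shape, which is why the policies stay readable even as they grow.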

“ As a flexible rules engine, Cloud Custodian allowed us to define rules and remediation efforts into one policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. ”

Now that we understand the basic concepts of Cloud Custodian, let’s cover the general implementation. Cloud Custodian policies are written and validated locally. These policies are then deployed either by running Cloud Custodian locally and authenticating to AWS or, in our case, via CI/CD pipelines. At Code42, we deploy a baseline set of policies to every AWS account as part of the bootstrapping process and then add or remove policies as needed for specific environments. In addition to account-specific policies, there are scenarios where a team may need an exemption; as such, we typically allow an “opt-out” tag for some policies. Code42 reports policy violations to a Slack channel via a webhook created for each AWS account. In addition, we distribute the resources.json logs directly into a SIEM for more robust handling and alerting.

Broadly speaking, Code42 has categorized policies into two types: (i) notify only and (ii) action and notify. Notify-only policies are more hygiene-related and include policies like tag compliance checks, multi-factor authentication checks and more. Action and notify policies take actions once certain conditions are met, unless tagged for exemption; they include policies like s3-global-grants, ec2-off-hours-enforcement and more. The output from the Custodian policies is also ingested into a SIEM solution to provide more robust visualization and alerting. This allows individual account owners to review policy violations and assign remediation actions to their teams. For Code42, these dashboards give both the security team and account owners the overall health of our security controls and account hygiene. Examples of Code42 policies may be found on GitHub.

What policies did we implement?

Code42 deployed three primary policy types: cost savings, hygiene and security. Since policies can take actions on resources, we learned it is imperative that the team implementing the policies collaborate closely with any teams affected by them, in order to ensure all stakeholders know how to find and react to alerts and can provide proper feedback and adjustments when necessary. Good collaboration with your stakeholders will ultimately drive the level of success you achieve with this tool. Let’s hit on a few specific policies.

Cost Savings Policy – ec2-off-hours-enforcement

EC2 instances are one of AWS’s most commonly used services. EC2 allows a user to deploy cloud compute resources on demand as necessary; however, there are many cases where the compute gets left “on” even when it’s not being used, which racks up costs. With Cloud Custodian, we’ve allowed teams to define “off-hours” for their compute resources. For example, if I have a machine that only needs to be online two hours a day, I can automate the start and stop of that instance on a schedule, saving 22 hours of compute time per day. As AWS usage increases and expands, these cost savings add up quickly.
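A policy along these lines can be built with Cloud Custodian’s built-in offhour/onhour filters. The hours, time zone and override tag name below are illustrative, not our exact configuration:

```yaml
policies:
  - name: ec2-off-hours-enforcement
    resource: ec2
    filters:
      - type: offhour               # built-in off-hours filter
        offhour: 19                 # stop instances at 7 p.m. ...
        default_tz: America/Chicago
        tag: custodian_downtime     # teams can set a custom schedule via this tag
    actions:
      - stop

  - name: ec2-on-hours-enforcement
    resource: ec2
    filters:
      - type: onhour
        onhour: 7                   # ... and start them again at 7 a.m.
        default_tz: America/Chicago
        tag: custodian_downtime
    actions:
      - start
```

The per-instance tag lets each team pick the window that fits their workload instead of forcing one schedule on everyone.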

Hygiene Policy – ec2-tag-enforcement

AWS resource tagging is highly recommended in any environment. Tagging allows you to define multiple keys with values on resources that can be used for sorting, tracking, accountability and so on. At Code42, we require a pre-defined set of tags on every resource that supports tagging, in every account. Manually enforcing this would be nearly impossible, so we utilize a Custodian policy to enforce our tagging requirements across the board. This policy performs the series of actions described below.

  1. The policy applies filters to look for all EC2 resources missing the required tags.
  2. When a violation is found, the policy adds a new tag to the resource “marking” it as a violation.
  3. The policy notifies account owners of the violation and that the violating instance will be stopped and terminated after a set time if it is not fixed.

If Cloud Custodian finds the tags have been added within 24 hours, it removes the violation tag. If the proper tags are not added, the policy continues to notify account owners that their instance will be terminated. If the violation is not fixed within the specified time period, the instance is terminated and a final notification is sent.
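The workflow above maps onto Cloud Custodian’s mark-for-op/unmark pattern. A simplified sketch follows; the required tags, marker tag, grace period and Slack/queue names are hypothetical, and the notify action additionally requires the c7n-mailer add-on:

```yaml
policies:
  - name: ec2-tag-enforcement-mark
    resource: ec2
    filters:
      - "tag:c7n_tag_violation": absent   # don't re-mark known violations
      - or:                               # step 1: any required tag missing
          - "tag:owner": absent
          - "tag:department": absent
    actions:
      - type: mark-for-op                 # step 2: "mark" the violation...
        tag: c7n_tag_violation
        op: terminate
        days: 3                           # ...with a grace period before termination
      - type: notify                      # step 3: warn the account owners
        to: ["slack://#aws-alerts"]       # hypothetical channel, via c7n-mailer
        transport:
          type: sqs
          queue: c7n-mailer-queue

  - name: ec2-tag-enforcement-unmark
    resource: ec2
    filters:
      - "tag:c7n_tag_violation": not-null
      - "tag:owner": not-null             # tags were fixed in time
      - "tag:department": not-null
    actions:
      - type: unmark                      # clear the violation mark
        tags: ["c7n_tag_violation"]

  - name: ec2-tag-enforcement-terminate
    resource: ec2
    filters:
      - type: marked-for-op               # grace period expired
        tag: c7n_tag_violation
        op: terminate
    actions:
      - terminate
```

Splitting mark, unmark and terminate into separate policies keeps each Lambda simple and makes the grace-period behavior easy to audit.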

This policy ultimately ensures we have tags that distinguish things like a resource “owner.” An owner tag allows us to identify which team owns a resource and where the deployment code for that resource might exist. With this information, we can drastically reduce investigation/remediation times for misconfigurations or for troubleshooting live issues.

Security Policy – S3-delete-unencrypted-on-creation

At Code42, we require that all S3 buckets have either KMS or AES-256 encryption enabled. It is important to remember that we have an “opt-out” capability built into these policies so they can be bypassed when necessary and after approval. The bypass is done via a tag that is easy for us to search for and review to ensure bucket scope and drift are managed appropriately.

This policy is relatively straightforward. If the policy sees a “CreateBucket” CloudTrail event, it checks the bucket for encryption. If no encryption is enabled and an appropriate bypass tag is not found, the policy deletes the bucket immediately and notifies the account owners. It’s likely by this point you’ve heard of a data leak caused by a misconfigured S3 bucket. It can be nearly impossible to manually manage a large-scale S3 deployment or buckets created by shadow IT. This policy helps account owners learn good security hygiene, and at the same time ensures our security controls are met automatically, without having to search through accounts and buckets by hand. Ultimately, this helps verify that S3 misconfigurations don’t lead to unexpected data leaks.
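A simplified sketch of what such an event-driven policy can look like; the bypass tag name is hypothetical, and filter names may vary between Cloud Custodian versions:

```yaml
policies:
  - name: s3-delete-unencrypted-on-creation
    resource: s3
    mode:
      type: cloudtrail                  # trigger the Lambda on this API event
      events:
        - CreateBucket
    filters:
      - "tag:encryption-exempt": absent # hypothetical approved opt-out tag
      - type: bucket-encryption
        state: false                    # neither KMS nor AES-256 default encryption
    actions:
      - delete                          # remove the offending bucket immediately
```

Because the policy fires on the CloudTrail event itself, an unencrypted bucket exists only for moments rather than until the next scheduled scan.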

Just starting out?

Hopefully this blog post helped highlight the power of Capital One’s Cloud Custodian and its automation capabilities. Cloud Custodian policies can be easily learned and written by non-developers, and they provide much-needed security capabilities. Check out the links in the “Resources” section below for Capital One’s documentation, as well as examples of some of Code42’s baseline policies that get deployed into every AWS account during our bootstrap process. Note: these policies should be tuned to your business and environment needs, and not all will be applicable to you.

Resources:

Authors:

Aakif Shaikh, CISSP, CISA, CEH, CHFI is a senior security analyst at Code42. His responsibilities include cloud security, security consulting, penetration testing and inside threat management. Aakif brings 12+ years of experience into a wide variety of technical domains within information security including information assurance, compliance and risk management. Connect with Aakif Shaikh on LinkedIn.


Byron Enos is a senior security engineer at Code42, focused on cloud security and DevSecOps. Byron has spent the last four years helping develop secure solutions for multiple public and private clouds. Connect with Byron Enos on LinkedIn.


Jim Razmus II is director of cloud architecture at Code42. He tames complexity, seeks simplicity and designs elegantly. Connect with Jim Razmus II on LinkedIn.

Tips From the Trenches: Automating Change Management for DevOps

One of the core beliefs of our security team at Code42 is SIMPLICITY. All too often, we make security too complex, whether because there are no easy answers or because the answers are very nuanced. But complexity also makes it really easy for users to find workarounds or ignore good practices altogether. So, we champion simplicity whenever possible and make it a basic premise of all the security programs we build.

“ At Code42, we champion simplicity whenever possible and make it a basic premise of all the security programs we build. ”

Change management is a great example of this. Most people hear change management and groan. At Code42, we’ve made great efforts to build a program that is nimble, flexible and effective. The tenets we’ve defined to drive our program are to:

  • PREVENT issues (collusion, duplicate changes)
  • CONFIRM changes are authorized changes
  • DETECT issues (customer support, incident investigation)
  • COMPLY with regulatory requirements

Notice compliance is there, but last on the list. While we do not negate the importance of compliance in conversations around change management or any other security program, we avoid at all costs using “because compliance” as the justification for anything we do.

Based on these tenets, we focus our efforts on high-impact changes that have the potential to affect our customers (both external and internal). We set risk-based maintenance windows that balance potential customer impact with the need to move efficiently.

We gather with representatives from both the departments making changes (think IT, operations, R&D, security) and those impacted by changes (support, sales, IX, UX) at our weekly Change Advisory Board meeting (one of the best attended and most efficient meetings of the week) to review, discuss and make sure teams are appropriately informed of what changes are happening and how they might be impacted.

This approach has been working really well. Well enough, in fact, for our Research Development & Operations (RDO) team to embrace DevOps in earnest.

New products and services were being deployed through automated pipelines instead of through our traditional release schedule. Instead of bundling lots of small changes into a product release, developers were now looking to create, test and deploy features individually–and autonomously. This was awesome! But also, our change management program–even in its simplicity–was not going to cut it.

“ Change control could not become a blocker in an otherwise automated process. We looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets, giving us visibility into the work being done. ”

So, with the four tenets we used to build our main program, we set off to evolve change management for our automated deployments. Thankfully, because all the impacted teams had seen the value of our change management program to date, they were on board and instrumental in evolving the program.

But an additional tenet had to be considered for the pipeline changes: change control could not become a blocker in an otherwise automated process. So we looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets, giving us visibility into the work being done. We defined levels of risk tied to the deployments and set approvers and release windows based on risk. This serves both as a control to minimize potential impact to customers and as a challenge to developers to push code that is as resilient and low-impact as possible so they can deploy at will.
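The risk-based gating described above can be sketched in a few lines of Python; the risk tiers, approver counts and windows here are hypothetical examples rather than Code42's actual rules:

```python
# Sketch of risk-based release gating: each risk tier maps to a required
# number of approvers and a release window. All values are illustrative.

RELEASE_RULES = {
    "low":    {"approvers": 1, "window": "anytime"},
    "medium": {"approvers": 2, "window": "business-hours"},
    "high":   {"approvers": 2, "window": "maintenance-window"},
}


def release_allowed(risk, approvals, in_window):
    """Return True if a deployment may proceed automatically: it has enough
    approvals and, for riskier changes, is inside its release window."""
    rule = RELEASE_RULES[risk]
    if approvals < rule["approvers"]:
        return False
    return rule["window"] == "anytime" or in_window


if __name__ == "__main__":
    print(release_allowed("low", 1, in_window=False))   # low risk deploys anytime
    print(release_allowed("high", 2, in_window=False))  # high risk waits for its window
```

In practice, a check like this would live in the pipeline itself, failing the deploy step (and opening a ticket) rather than returning a boolean.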

We still have work to do. Today, we are tracking when changes are deployed manually. In our near-future state, our pipeline tooling will serve as a gate, holding higher-risk deployments for release in maintenance windows. Additionally, because we want to focus on risk, we are building in commit hooks with required approvers based on risk rating. And again, because we worked closely with the impacted teams to build a program that fit their goals (and because our existing program had proven its value to the entire organization), the new process is working well.

Most importantly, evolving our change process for our automated workflows allows us to continue to best serve our customers by iterating faster and getting features and fixes to the market faster.

Connect with Michelle Killian on LinkedIn.


Tips From the Trenches: Thinking About Security Design

Part of the success criteria for any security program is to ensure the process, control or technology utilized has some additional benefit aside from just making things “more secure.” Most controls we impose to make ourselves safer often come at the expense of convenience. But what if we took a different approach when thinking about them? A mentor of mine often starts a security design discussion by asking us to consider the following:

Why do cars have brakes?

Naturally, my first thought is that brakes allow the driver to slow or stop when going too fast. After all, a car with no brakes is dangerous, if not completely useless. However, when we consider that the braking system in the car enables the driver to go as fast as they want, the purpose of this control takes on a new meaning.

Changing perceptions about the controls we impose in security design doesn’t come easily. Even some of the most seasoned infosec professionals will insist a particular control be in place without considering how the control impacts workflow, or worse, the bottom line.

“ As security professionals, we need to design controls that empower our business in the safest way possible, without getting in the way of where we’re trying to go. ”

Aligning controls and risks

Some of the most impactful security controls are the ones we don’t even realize are there. When designed correctly, they mitigate risk while providing a benefit to the user. The proliferation of biometric security is a great example of this. My mobile phone and computer offer the ability for me to access the device by simply touching or staring at it. Because I am much more focused on how convenient and easy it is to unlock my phone to look at cat pictures, I forget that these controls were designed as a security measure.

As a security professional, I do, however, need some assurance that the controls can’t be easily circumvented. For example, a quick search for exploits of fingerprint or face-recognition systems will show that they can be easily fooled with a 3D printer, some Play-Doh and a little time. However, when enhanced with an additional factor like a password or PIN, the authentication mechanism evolves to something much more difficult to compromise while being considerably easier for me to remember than a 16-character password that I have to change every ninety days.

In Information Security, this is why it’s important for us to consider how we design solutions for our environment. If all I’m protecting is access to cat pictures, is my face or fingerprint unlock enough? I’d say so. But for my Database Administrator (DBA) or Identity and Access Management (IAM) administrator to protect my company’s crown jewels? Definitely not.

Creating controls with a purpose

And this is what I think brings us to the crux of security design: as an end-user, if I don’t know why the control is there, I won’t use it or I might even try to go around it. Moreover, if I have no idea that it’s there, it had better work without getting in my way.

Let’s return to the car example. My daughter just finished the process of getting her driver’s license. In doing so, just like her old man, she was subject to videos depicting the horrors of car accidents and negligent driving. Way back in my day, the message was clear: driving death was thwarted by seatbelts and the ten-and-two. For her, it’s not texting and driving and the eight-and-four. I have absolutely no idea how a seatbelt can help me avoid an accident, but I’m crystal clear why I need one, should it happen. If I ask her about texting-and-driving, she’ll be equally clear that it’s possible to kill someone while doing it.

Getting back to the topic of security design, if I don’t understand why I need the control, it’s better that I have no awareness it’s around. Just like an airbag, I need to trust it’s there to protect me. On the flip side, I definitely need to know the importance of buckling up or putting my phone in the glovebox so I can keep my eyes on the road.

Transparent security

And this is what excites me about what we’re building at Code42 with our Code42 Next-Gen Data Loss Protection solution. Transparent security.

In the traditional Data Loss Prevention (DLP) space, transparent security is not an easy task. More often than not, people just trying to do their jobs end up getting blocked by a one-size-fits-all policy. Our application, on the other hand, enables security administrators to work with the business in a way that gives it what it wants: protection for its best ideas without Security getting in the way.

Computers, just like cars, can be dangerous and yet, each of us can’t imagine a life without them. Their utility demands they be safe and productive. As security professionals, we need to design controls that empower our business in the safest way possible, without getting in the way of where we’re trying to go.


Tips From the Trenches: Using Identity and Access Management to Increase Efficiencies and Reduce Risk

As a security company, it’s imperative that we uphold high standards in every aspect of our security program. One of the most important and foundational of these areas is our Identity and Access Management (IAM) program. As part of Code42’s approach to this program, we have identified guiding principles that have a strong focus on automation. Below is an outline of our journey.

IAM guiding principles

Every IAM program should have guiding principles agreed upon with HR, IT and security. Here are a few of ours:

1. HR would become the source of truth (SoT) for all identity lifecycle events, ranging from provisioning to de-provisioning.

The initial focus was to automate the provisioning and de-provisioning process, then address the more complex transfer scenario at a later phase. HR would trigger account provisioning when an employee or contractor was brought onboard, and shut off access as workers left the company. Further, the HR system would become authoritative for the majority of identity-related attributes for our employees and contractors. This allowed us to automatically flow updates made to an individual’s HR record (e.g. changes in a job title or manager) to downstream connected systems that previously required a Help Desk ticket and manual updates.
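The attribute-flow idea above boils down to diffing the authoritative HR record against what a downstream system currently holds. A minimal sketch, with illustrative attribute names (not the actual Code42 schema):

```python
# Hedged sketch: compute which identity attributes in a downstream
# directory entry are out of date relative to the HR source of truth.
# Attribute names are illustrative only.

SOT_ATTRIBUTES = ("title", "manager", "department")

def attribute_updates(hr_record: dict, directory_record: dict) -> dict:
    """Return the attributes (and new values) that need to be pushed
    downstream because the HR source of truth has changed."""
    return {
        attr: hr_record[attr]
        for attr in SOT_ATTRIBUTES
        if hr_record.get(attr) != directory_record.get(attr)
    }
```

In a real sync engine, the returned dict would drive the connector's update call instead of a Help Desk ticket.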

2. Our objectives would not be met without data accuracy and integrity.

In-scope identity stores such as Active Directory (AD) and the physical access badge system had to be cleansed of legacy (stale) and duplicate user accounts before they were allowed to be onboarded into the new identity management process. Any user account that could not be matched or reconciled to a record in the SoT system was remediated. Although a rather laborious exercise, this was unquestionably worth it in order to maintain data accuracy.
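The reconciliation step above is conceptually simple: any account that cannot be matched to a source-of-truth record is a candidate for remediation. A toy sketch (field names are hypothetical):

```python
# Hedged sketch: flag directory accounts that cannot be matched to a
# source-of-truth record — the stale or duplicate accounts that must be
# remediated before onboarding into identity management.

def unmatched_accounts(directory_accounts, sot_employee_ids):
    """Return accounts with no corresponding source-of-truth record."""
    known = set(sot_employee_ids)
    return [acct for acct in directory_accounts
            if acct.get("employee_id") not in known]
```

The laborious part in practice is not this filter but deciding, account by account, whether each unmatched entry is a service account to keep, a duplicate to merge or a stale account to delete.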

3. Integrate with existing identity infrastructure wherever possible.

We used AD as our centralized Enterprise Directory, which continues to function as the bridge between our on-prem environment and our cloud identity broker, Okta. Integrating with AD was of crucial importance, as this allows us to centrally manage access to both on-premises and cloud-based applications. When a worker leaves the company, all we need to do is ensure the user account is disabled in AD, which in turn disables the person’s access in Okta.

Once we had agreement on our guiding principles, it was time to start the design and implementation phase. We built our solution using Microsoft’s Identity Manager (MIM) because our IAM team had used Microsoft’s provisioning and synchronization engine in the past and found it to be easy to configure with many built-in connectors and extendable via .NET.  

IAM implementation phases

Identity in every organization is managed through a lifecycle. Below are two of the identity phases we have worked through and the solutions we built for our organization:

1. Automating provisioning and deprovisioning is key, but can also cause challenges.

One challenge we had was the lag between a new employee starting and their records being populated in the systems that act as the source of truth. This leaves no lead time to provision a user account and grant access for the incoming worker. We overcame this obstacle by creating an intermediate “SoT identity” database that mirrors the data we receive from our HR system. From there, we were able to write a simple script that ties into our service desk and creates the necessary database entry.
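One way to picture the intermediate "SoT identity" store is as a table that accepts an early entry from the service-desk script and is later overwritten when the authoritative HR record arrives. The schema and flow below are a hypothetical sketch, not Code42's actual implementation:

```python
# Hedged sketch: an intermediate identity table that mirrors HR data but
# also accepts pre-hire entries created from a service-desk ticket, so
# provisioning can lead the HR feed. Schema is illustrative.

import sqlite3

def init_db(conn):
    conn.execute(
        "CREATE TABLE identities ("
        " employee_id TEXT PRIMARY KEY,"
        " name TEXT, start_date TEXT, source TEXT)"
    )

def add_pre_hire(conn, employee_id, name, start_date):
    """Called by the service-desk script before the HR record exists."""
    conn.execute(
        "INSERT INTO identities VALUES (?, ?, ?, 'service_desk')",
        (employee_id, name, start_date),
    )

def reconcile_hr(conn, employee_id, name, start_date):
    """When the HR feed delivers the record, it becomes authoritative."""
    conn.execute(
        "INSERT INTO identities VALUES (?, ?, ?, 'hr')"
        " ON CONFLICT(employee_id) DO UPDATE SET"
        " name=excluded.name, start_date=excluded.start_date, source='hr'",
        (employee_id, name, start_date),
    )
```

The upsert in `reconcile_hr` is what makes HR the source of truth: whatever the pre-hire entry said, the HR values win once they arrive.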

The next challenge was to automate the termination scenario. As at most companies, our HR systems maintain the user record long past an employee’s departure date for compliance and other reasons. Despite this, we needed a way to decommission the user immediately at the time of departure. For this, we developed a simple Web Portal that allows our Helpdesk and HR partners to trigger termination. Once a user is flagged for termination in the Portal, the user’s access is automatically disabled by the identity management system. De-provisioning misses are a thing of the past!
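The termination flow reduces to two steps: the portal flags the user, and the identity engine disables the directory account, which (per the AD/Okta integration described earlier) cuts off downstream access. A hedged, illustrative sketch:

```python
# Illustrative sketch of the termination flow: flag a user in the portal,
# then disable every flagged directory account. Names and structures are
# hypothetical stand-ins for the real portal and directory.

def flag_termination(pending, username):
    """Called when HR or the help desk flags a departure in the portal."""
    pending.add(username)

def process_terminations(pending, directory):
    """Disable every flagged account; return the usernames handled."""
    handled = []
    for user in sorted(pending):
        if user in directory:
            directory[user]["enabled"] = False
            handled.append(user)
    pending.difference_update(handled)
    return handled
```

The key design point is that disabling one account (the directory entry) is the single switch; every federated system downstream trusts it.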

2. Re-design and improve the access review process.

This phase aims to replace our current manual, spreadsheet-based, quarterly access certification process with a streamlined process using the built-in review engine in the identity management tool.

Implementing IAM at Code42 has been an awesome experience, and with the impending launch of the request portal, this year will be even more exciting! No matter how far along you are in your IAM implementation journey, I hope the concepts shared here help you along the way.


Tips From the Trenches: Red Teams and Blue Teams

In my most recent post, I wrote about the important role proactive threat hunting plays in a mature security program. Equally important to a well-designed program and closely related to hunting for threats is having a robust red team testing plan. Having a creative and dynamic red team in place helps to “sharpen the knife” and ensure that your security tools are correctly configured to do what they are supposed to do — which is to detect malicious activity before it has advanced too far in your environment.

“ It is much more challenging to build and maintain defensible systems than infiltrate them. This is one of the reasons why red team exercises are so important. ”

Red teams and blue teams

A red team’s mandate can range from assessing the security of an application or an IT infrastructure to testing a physical environment. For this post, I am referring specifically to general infrastructure testing, where the goal is to gain access to sensitive data by (almost) any means necessary, evaluate how far an attacker can go, and determine whether your security tools can detect or protect against the malicious actions. The red team attackers will approach the environment as if they are an outside attacker.

While your red team assumes the role of the attacker, your blue team acts as the defender. It’s the blue team that deploys and manages the enterprise’s defenses. While the red team performs their “attack” exercises, there are many things your blue team can learn about the effectiveness of your company’s defenses — where the shortfalls are and where the most important changes need to be made.

Defining success

Before conducting a red team test, it helps to decide on a few definitions:

1. Define your targets: Without specifying what the critical assets are in your environment — and therefore what actual data an actual attacker would try to steal — your testing efforts will not be as valuable as they could be. Time and resources are always limited, so make sure your red team attempts to gain access to the most valuable data in your organization. This will provide you the greatest insights and biggest benefits when it comes to increasing defensive capabilities.

2. Define the scope: Along with identifying the data targets, it is essential to define the scope of the test. Are production systems fair game or will testing only be done against non-production systems? Is the social engineering of employees allowed? Are real-world malware, rootkits or remote access trojans permitted? Clearly specifying the scope is always important so that there aren’t misunderstandings later on.

How tightly you scope the exercise involves tradeoffs. Looser restrictions make for a more realistic test. No attacker will play by the rules. They will try to breach your data using any means necessary. However, opening up production systems to the red team exercise could interrupt key business processes. Every organization has a different risk tolerance for these tests. I believe that the more realistic the red team test is, the more valuable the findings will be for your company.

Once you define your scope, make sure the appropriate stakeholders are notified, but not everybody! Telegraphing the test ahead of time won’t lead to realistic results.

3. Define the rules of engagement: With the scope of the test and data targets well defined, both the red team and the blue team should have a clear understanding of the rules for the exercise. For example, if production systems are in scope, should the defenders treat alarms differently if they positively identify an activity as part of the test? What are the criteria for containment, isolation and remediation for red team actions? As with scope, the more realistic you can make the rules, the more accurate the test will be, but at the potential cost of increased business interruption.
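One way to keep the targets, scope and rules unambiguous is to capture them as data that both teams sign off on. The sketch below is purely illustrative — the action names and fields are hypothetical, not a real red-team framework:

```python
# Illustrative sketch only: record a red-team exercise's scope and rules
# of engagement as one shared, unambiguous artifact.

from dataclasses import dataclass, field

@dataclass
class RulesOfEngagement:
    targets: list                 # the critical data/assets in scope
    production_in_scope: bool = False
    social_engineering: bool = False
    real_malware: bool = False
    notified: set = field(default_factory=set)  # stakeholders who know

    def action_allowed(self, action: str) -> bool:
        """Check a proposed red-team action against the agreed rules."""
        allowed = {
            "phish_employee": self.social_engineering,
            "deploy_rat": self.real_malware,
            "touch_production": self.production_in_scope,
        }
        return allowed.get(action, False)
```

Anything not explicitly permitted defaults to disallowed, which mirrors how most organizations prefer to run these exercises.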

Making final preparations

Don’t end the test too quickly. A real attacker who targets your organization may spend weeks or even months performing reconnaissance, testing your systems and gathering information about your environment before they strike. A one-day red team engagement won’t be able to replicate such a determined attacker. Giving the red team the time and resources to mount a realistic attack will make for more meaningful results.

It’s also important to precisely define what success means. Often a red team attacker will gain access to targeted resources. This should not be seen as a failure on the part of the blue team. Instead, success should be defined as the red team identifying gaps and areas where the organization can improve security defenses and response processes — ultimately removing unneeded access to systems that attackers could abuse. A test that ends too early because the attacker was “caught” doesn’t provide much in the way of meaningful insights into your security posture. An excellent red team test is a comprehensive one.

It’s important to note that defenders have the harder job, as the countless daily news stories about breaches illustrate. It is much more challenging to build and maintain defensible systems than infiltrate them. This is one of the reasons why red team exercises are so important.

Completing the test

Once the test is complete, the red team should share with the blue team the strategies they used to compromise systems, gain access and evade detection. Of course, the red team should be documenting all of this during the test. Armed with this information, the blue team can determine how to harden the environment and create a bigger challenge for the red team during the next exercise.

We have a fantastic red team here at Code42. The team has conducted multiple tests of our infrastructure, and we have always found the results to be incredibly valuable. Any organization, no matter the size, can gain much more than they risk by performing red team testing.

As always, happy threat hunting!


Tips From the Trenches: Threat-Hunting Weapons

When it comes to cybersecurity, too many enterprises remain on a reactive footing. This ends up being a drag on their efforts because, rather than getting ahead of the threats that target their systems, they spend too much of their time reacting to security alerts and incidents within their environments.

While being able to react to attacks quickly is important for any security team, it’s also important to get out in front of potential risks to identify threats lurking within your systems before they become active.

In this post, we’ll explain how threat hunting within one’s environment can help to break that reactive cycle and improve the effectiveness of any security program.

“ You don’t need a large security organization or any special security tools to start to proactively threat hunt; any security team can start threat hunting, and often using the tools they already have. ”

Threat hunting defined

Before going forward, let’s first take a step back and define what we mean by threat hunting. Essentially, threat hunting is the proactive search for evidence of undetected malicious activity or compromise. These threats can include anything from remote-access tools beaconing to an attacker’s command and control server to malicious actions of an employee or other trusted insider.

Threat hunting is essential for effective security for many reasons. First, defensive security technologies such as intrusion detection/prevention systems and anti-malware software will never successfully identify and block all malware or attacks. Some things are just going to get through. Second, by finding malware and threats that made it past your defenses, you’ll be able to more effectively secure your systems and make your environment much harder for attackers to exploit. Finally, getting adept at finding threats in your environment will improve your organization’s overall ability to respond to threats and, as a result, over time dramatically improve your security posture.

Your arsenal

Because threat hunting entails looking for things that have yet to trigger alerts — if they would ever trigger alerts at all — it is important to look deeper for evidence of compromise. Fortunately, you don’t need a large security organization or any special security tools to start to proactively threat hunt; any security team can start threat hunting, often using the tools they already have.

For instance, many of the data sources used in threat hunting will be found in firewall, proxy and endpoint logs. While these sources of data probably aren’t alerting on anything malicious, they still hold a considerable amount of security data that can reveal indicators that an environment has been breached without detection.
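A concrete example of hunting in those logs: command-and-control implants often "beacon" home at regular intervals, so connections to one destination that are suspiciously evenly spaced are worth a closer look. The sketch below is a simplified illustration of the idea (the thresholds are arbitrary; tools like RITA do this far more rigorously):

```python
# Hedged sketch: flag a host/destination pair whose connection timestamps
# (in seconds) are numerous and evenly spaced — a crude beaconing check.
# min_events and max_jitter are illustrative thresholds.

from statistics import pstdev

def looks_like_beacon(timestamps, min_events=6, max_jitter=2.0):
    """True if there are enough connections and the gaps between them
    vary by no more than max_jitter seconds (population std dev)."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return pstdev(gaps) <= max_jitter
```

Real implants add jitter deliberately, which is why production tools score interval distributions rather than applying a single cutoff like this.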

Other readily available tools are helpful for threat analysis, such as Bro (https://www.bro.org/), RITA (https://github.com/activecm/rita), or OSQuery (https://osquery.io/). These tools will help provide additional visibility into network and endpoint data that could provide insights into potential compromise. With these tools, teams can monitor internal network activity, such as virus outbreaks and lateral movements of data. Monitoring East-West network traffic in addition to what is moving through the firewall provides critical insights to the overall health of your network.

The investigation capabilities of Code42 Next-Gen Data Loss Protection (DLP) can be extremely helpful for threat hunting: they can show how widely a file is distributed in the environment and give information about its lifecycle, all of which provides context around whether a file is business-related or suspicious. For example, with Code42 Next-Gen DLP, you can search by MD5 or SHA-256 hash to find all instances of a sensitive file in your organization, or determine whether known malware has been detected in your organization.
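Hash-based searching starts with computing digests of the files in question. A small, self-contained sketch of that step (the matching logic is generic, not tied to any particular product's API):

```python
# Hedged sketch: compute SHA-256 digests for local files so they can be
# matched against a set of known-bad hashes or fed to a file search.

import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def find_matches(paths, known_hashes):
    """Return the files whose digest appears in the known-hash set."""
    known = {h.lower() for h in known_hashes}
    return [p for p in paths if sha256_of(p) in known]
```

In a hunt, `known_hashes` would typically come from a threat-intel feed or a previously identified malicious sample.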

New tools and new ways of thinking may seem overwhelming at first. However, threat hunting doesn’t have to be all-consuming. You can start with committing a modest amount of time to the hunt, and incrementally build your threat hunting capability over weeks and months to find malicious files and unusual activity. Also, as a direct benefit to your security program you will be able to eliminate noise in your environment, better tune your security tools, find areas of vulnerability and harden those areas, and enhance your security posture at your own pace.

Now, get hunting.


Tips From the Trenches: Enhancing Phishing Response Investigations

In an earlier blog post, I explained how the Code42 security team is using security orchestration, automation and response (SOAR) tools to make our team more efficient. Today, I’d like to dive a little deeper and give you an example of how we’re combining a SOAR tool with the Code42 Forensic File Search API — part of the Code42 Next-Gen Data Loss Protection (DLP) product —  to streamline phishing response investigations.

A typical phishing response playbook — with a boost

Below is a screenshot of a relatively simple phishing response playbook that we created using Phantom (a SOAR tool) and the Code42 Forensic File Search API:

We based this playbook on a phishing template built into the Phantom solution. It includes many of the actions that would normally be applied as a response to a suspicious email — actions that investigate and geolocate IP addresses, and conduct reputation searches for IPs and domains. We added a couple of helper actions (“deproofpoint url” and “domain reputation”) to normalize URLs and assist with case management.

You may have noticed one unusual action. We added “hunt file” via the Code42 Forensic File Search API. If a suspicious email has an attachment, this action will search our entire environment by file hash for other copies of that attachment.

“ Combining the speed of Code42 Next-Gen DLP with the automation of SOAR tools can cut response times significantly. ”

What Code42 Next-Gen DLP can tell us

Applying Code42 Next-Gen DLP to our playbook shortens investigation time. The “hunt file” action allows us to quickly see if there are multiple copies of a malicious file in our environment. If that proves to be true, it is quick evidence that there may be a widespread email campaign against our users. On the other hand, the search may show that the file has a long internal history across file locations and endpoints. This history would suggest that the file exists as part of normal operating procedure and that we may be dealing with a false alarm. Either way, the Code42 Next-Gen DLP API and its investigation capability together give us additional file context, so our security team can make smarter, more informed and more confident decisions about what to do next.
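The triage logic described above — many fresh sightings suggest a campaign, a long history suggests a benign file — can be sketched as a simple classifier. This is an illustrative stand-in for the playbook's decision step; the function, field names and thresholds are hypothetical, and a real implementation would consume actual file-search API results:

```python
# Illustrative sketch: classify "hunt file" results for one attachment
# hash. sightings is a list of (hostname, first_seen) pairs; thresholds
# are arbitrary examples, not a product default.

from datetime import datetime, timedelta

def classify_hunt_results(sightings, now,
                          new_window_days=2, spread_threshold=5):
    """Distinguish a likely campaign from a file with benign history."""
    if not sightings:
        return "no other copies"
    hosts = {host for host, _ in sightings}
    window = timedelta(days=new_window_days)
    all_new = all(now - seen <= window for _, seen in sightings)
    if len(hosts) >= spread_threshold and all_new:
        return "possible widespread campaign"
    if any(now - seen > window for _, seen in sightings):
        return "long internal history; likely benign"
    return "isolated; investigate normally"
```

In a SOAR playbook, the returned label would branch the workflow — for example, toward mass quarantine versus closing the ticket as a false alarm.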

Applying Code42 Next-Gen DLP to other threat investigations

This type of “hunt file” action does not need to be limited to investigating suspected phishing emails. In fact, it could be applied to any security event that involves a file — such as an anti-virus alert, an EDR alert or even IDS/IPS alerts that trigger on file events. Using Code42 Next-Gen DLP, security staff can determine in seconds where else that file exists in the environment and if any further action is necessary.

Combining the speed of Code42 Next-Gen DLP with the automation of SOAR tools can cut response times significantly. That’s something any security team can appreciate.

As always, happy threat hunting!