As a security software company, it’s essential that everyone at Code42 thoroughly understands the security industry. This is true for nearly every position. Our sales teams need to fully understand the needs of our customers, and human resources needs to understand security as it recruits candidates in an industry where competition for talent is fierce.
Marketing clearly needs to understand not only the big-picture security needs of our customers, but also the day-to-day life and challenges of a security analyst. Furthermore, as security becomes an integral component of DevSecOps, developers need to better understand application security, which means that security folks also need to sharpen their coding skills.
Of course, not everyone requires the depth of knowledge one would expect of a professional security team, but everyone who works at a security software company should understand security basics. With that goal in mind, we have created the new Security Ninja program, designed to teach security and enable employees to earn new belts as their mastery progresses. These belts start with a white belt and culminate with a black belt, which requires a security certification to earn. These Code42 security ninjas will become our security ambassadors within the company.
This self-driven program, which begins when an employee registers to earn a belt, can be completed on an employee’s individual schedule. Credits are earned for time spent learning and can come from a mix of activities: free online training (including YouTube videos), attending a security lunch, and sharing lessons learned on our company’s Slack channel. When employees share their lessons learned on our internal Slack channels, it makes me smile, because we now have employees who are teaching each other what they know about information security.
For security awareness teams, watching employees gain security knowledge that exceeds what is required for compliance is a dream come true. These trainings are no cakewalk, mind you: To earn a belt, applicants must not be late on any of their security or privacy trainings, and they must not have clicked on a link in a test phishing email. If they do, they can apply to continue their training in the following quarter. Since we implemented the Ninja program last January, we’ve seen our training completions rise and fewer phishing-test links clicked. This is a huge win.
To keep engagement high, we’ve built the program to be competitive and also fun and lighthearted. We regularly communicate about the program on our company-wide Slack channel. Some managers have set goals for their teams to gain their belts and initiate a bit of friendly competition in the process. Our sales teams are thrilled to expand their security expertise to better understand our customers and prospects and to speak their language.
Here’s how applicants earn their belts: First, they must provide evidence of completion on the learning activities they chose, even if it’s just a screenshot. Once they’ve gained the required amount of training credits, applicants can then take an online exam in our Learning Management System (LMS). At the end of the quarter, the LMS list of successful exam completions becomes my starting list to check off evidence submitted by each applicant. I check evidence “audit style” by randomly selecting people to audit; the truth is, however, that I’m so thrilled at the work they are all doing that I tend to review all evidence submitted, especially the “lessons learned.” There is no greater sense of satisfaction for a security awareness professional.
Each quarter, we celebrate all of the new ninjas and award them their “belt,” i.e., a colored badge with an outline of a ninja. The ninjas can attach the belt to their badge holder or lanyard to proudly display their ninja level status. Of course, we have fun with this, too, by inviting everyone to our main meeting area and providing donuts to celebrate their accomplishments. We call it “Donuts in the Dojo,” and our CISO is there to congratulate everyone on their newfound security expertise.
This is not only a win for the security team, it’s also a win for the employees. They can more confidently navigate the world of security professionals and better understand our customers. All of this means it’s a huge win for Code42.
Insider Threat Is Real
See how Code42 Next-Gen Data Loss Protection provides visibility to file exfiltration and intellectual property theft from departing employees.
Slack, the popular collaboration tool, got more than its
share of media attention last month. All this Slack buzz gives us an
opportunity to share how we use Slack here at Code42. We’ve thoroughly vetted
Slack, and rather than banning it as a security risk, we actually use the tool
to enhance our security capabilities.
Why Code42 uses Slack
So, what about those security concerns? Any tool that
facilitates the sharing of information brings some risk of user abuse or error,
such as oversharing, mis-sharing, etc. That’s true for Slack, just as it’s true
for Google Docs, Dropbox — and even, yes, Microsoft Teams. Just like our
approach to data loss protection, our internal security strategy takes an
honest look at risk mitigation that focuses on the biggest risks — without
unnecessarily impeding productivity, collaboration and innovation. Like all our
third-party vendors, we hold Slack to our rigorous vendor security standard,
which includes an annual vendor security risk reassessment process. Moreover,
we’ve put security controls in place that balance the need to mitigate the
inherent risks of information-sharing with the productivity and innovation
value of the tool itself.
How we use Slack
At Code42, nearly every employee uses Slack every day for
real-time direct messaging, increasing productivity and helping us deliver on
one of our core company values: Get it Done, Do it Right. The Code42 security
team, in particular, leverages Slack in unique and powerful ways. Here are a couple of ways we have integrated Slack
functionality to improve our internal security program:
Security alert notifications: Slack’s Incoming WebHooks allow you to connect applications and services to your Enterprise Slack. We use this capability to implement security notifications tied to activities in our security applications, which are then posted in a corresponding Slack channel. This provides our security analysts and partners across the business with real-time alerts right in the application where they are already communicating and collaborating throughout the day, helping them take appropriate and timely action.
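As a sketch of how such a notification can be wired up (the webhook URL, the alert fields and the message format here are illustrative, not our production integration), posting to a Slack Incoming WebHook is just an HTTP POST with a JSON body:

```python
import json
import urllib.request

def build_alert(source, severity, message):
    """Build a Slack message payload for a security alert.

    The payload shape (a "text" field) follows Slack's Incoming
    WebHooks API; the alert convention (source, severity prefix)
    is a made-up example.
    """
    return {
        "text": f":rotating_light: [{severity.upper()}] {source}: {message}"
    }

def post_alert(webhook_url, payload):
    """POST the JSON payload to the Incoming WebHook URL."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Building the payload; the webhook URL itself would come from
# Slack's app configuration and is not shown here.
alert = build_alert("cloud-custodian", "high", "Public S3 bucket remediated")
print(alert["text"])
```

Each security application gets its own webhook and channel, so analysts see alerts in the same place they already collaborate.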
For instance, we have created private channels to alert on critical events within different environments, such as alerts from Capital One’s Cloud Custodian. The alerts are based on policy violations that we define in YAML policy files. Cloud Custodian then alerts our team — and takes action when needed. For example, if Cloud Custodian sees an S3 bucket configured as public, it will make it private by changing permissions in the access control lists (ACLs) and bucket policies — and then notify our teams of the change via Slack.
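As a sketch of what such a policy can look like in YAML (the filter and remediation actions follow Cloud Custodian’s documented `global-grants` and `delete-global-grants` types; the Slack notification requires the separate c7n-mailer add-on, and the queue and webhook values below are placeholders, not our production configuration):

```yaml
policies:
  - name: s3-deny-global-access
    resource: s3
    # Match buckets whose ACL grants access to AllUsers
    filters:
      - type: global-grants
    actions:
      # Remediate: strip the public grants from the bucket ACL
      - type: delete-global-grants
        grantees:
          - "http://acs.amazonaws.com/groups/global/AllUsers"
      # Notify: c7n-mailer relays the event to our Slack channel
      - type: notify
        slack_template: slack_default
        to:
          - "https://hooks.slack.com/services/..."  # placeholder webhook URL
        transport:
          type: sqs
          queue: "https://sqs.us-east-1.amazonaws.com/123456789012/custodian-notify"  # placeholder queue
```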
Security news and updates: Our security team also created a public channel (open to everyone at Code42) as a collaborative workspace for all users. The public channel enables staff to crowdsource and share security knowledge, and to have discussions around the latest security news. Anyone can post security articles, whitepapers, podcasts, blogs or news — highlighting interesting ideas — and weighing in on each other’s responses. This channel acts as a security news feed, delivering just-in-time security-related information to employees to keep them aware of the latest security threats and trends. Code42 employees also often post what they are seeing in their own news feeds as they become more security savvy.
Walking the Talk
At Code42, we talk a lot about the fundamental paradox of enterprise information security: Information-sharing is both the key to success — and the biggest risk — in organizations. The smart approach focuses on controlling the risk, so you can unlock that value. We’ve vetted Slack and put security controls in place, so we can leverage its capabilities to fuel collaboration, enhance productivity and improve our internal security capabilities. Slack integrates with our security tools for real-time alerting and allows us to quickly disseminate security knowledge throughout the organization. Our internal use of Slack demonstrates how we walk the talk in our own approach to information security.
For more details on Cloud Custodian, read our post Tips From the Trenches: Cloud Custodian – Automating AWS Security, Cost and Compliance.
It’s been a year since we — and many of you — went live with enhancements to our privacy and security programs tied to GDPR, and two years since we started the GDPR journey. That’s why it’s a great time to look back at the impact GDPR has had on the way we do business.
This post is purely for general information purposes and is not intended as legal advice. This blog gives a glimpse into Code42’s early GDPR implementation. Both our program and GDPR itself, along with other national and international privacy rules, will continue to evolve and mature.
“ The GDPR journey shouldn’t be a one-department initiative or the sole responsibility of Legal or Security. It must be a business-driven initiative with Legal and Security providing recommendations and guidance. ”
What we did to get ready for May 2018
We started preparing for GDPR around May 2017. The GDPR journey shouldn’t be a one-department initiative or the sole responsibility of Legal or Security. It must be a business-driven initiative with Legal and Security providing recommendations and guidance. At Code42, we established a cross-functional group comprised of Legal, Security, IT and system subject matter experts. The key activities of this group were to:
Create an inventory of applications in scope for GDPR. We have European employees and customers so we had to look at applications that were both internal and customer-impacting. When outlining in-scope applications for GDPR, we kept in mind that more restrictive data privacy laws seem imminent in the U.S. We also conducted a cost-benefit analysis to determine whether we should keep non-EU PI in scope now or revisit it at a later date.
Define retention periods for all of the applications in scope. Prior to our GDPR journey, we had a retention program in place, but it was largely focused on data we knew we had legal, regulatory or other compliance obligations around, including financial records, personnel files, customer archives and security logs. GDPR just gave us the nudge we needed to mature what we were already committed to and have better conversations around what other data we were storing and why.
Figure out how to purge personal data from applications. This may be challenging for SaaS organizations. When applications are managed on premise, it’s much easier to delete the data when you no longer need it. But translating that to all your SaaS applications is another story. There are a few areas where SaaS applications are still maturing compared to their on-prem counterparts, and data deletion appears to be one of them. Delete (or anonymize) data, where you can. Otherwise, either add the applications to a risk register, requesting that the application owner do a risk accept and submit a feature request to the vendor, or look for a new vendor who can meet your retention requirements.
Create an audit program to validate compliance with our security program. We are fortunate to have an awesome internal audit program that monitors effectiveness of our security program, among other IT and technology-related audit tasks. So it was logical to test our in-scope applications against our newly defined retention requirements. We review applications periodically.
And lastly, but just as important, define a process for data subjects to request that their information be deleted outside of a standard retention schedule (aka “right to be forgotten”). It is important to remember that this is not an absolute. While we want to honor a data subject’s request as much as possible, there may be legitimate business cases where you may need to maintain some data. The key for us was defining what those legitimate business cases were so we could be as transparent as possible if and when we received a request.
What we’ve learned in the last year
So what have we learned about GDPR one year and two internal audits later? A lot.
What’s going well
1. A vendor playing nice
We had a really great success story early on with one vendor. When we dug into it, we found that our users were previously set up with the ability to use any email address (not just a Code42 email). We also learned our instance was configured to save PII that wasn’t a necessary business record. Based on that conversation, we were able to make a few configuration changes and actually take that application out of scope for GDPR!
2. A more robust application lifecycle program and greater insight into the actual cost of a tool
As a technology company that is continually innovating, we want to empower our users to use tools and technologies that excite them and increase productivity. At the same time, we want to ensure we are addressing security, privacy and general business requirements. Users often find tools that are “so cheap” in terms of the cost of user licenses. Our new Application Lifecycle Management (ALM) process, however, gives us a better sense of the actual cost of a new tool when we factor in:
Onboarding requirements: Think Legal, Security, IT, Finance. Are there compliance requirements? Do we already have similar tools in place?
Audit requirements: Will this be part of the GDPR data retention audit, user access audit or other application audit?
Stand-up/stand-down requirements: Will it integrate with single sign-on solution? How does it integrate with other tools? How is data returned or destroyed?
Support requirements: Who are users going to contact when they inevitably need help using the tool?
When the person making the request can see all of the added costs going into this “inexpensive” tool, it makes for easier discussions. Sometimes we’ve moved forward with new tooling. Other times we’ve gone back to existing tools to see if there are features we can take advantage of because the true “cost” of a new solution isn’t worth it.
3. A great start toward the next evolution of privacy laws
On the heels of GDPR, there has been a lot of chatter about the introduction of more robust state privacy laws and potentially a federal privacy law. While future regulations will certainly have their own nuances, our goal is to position ourselves to comply with them with just small tweaks rather than major lifts like the GDPR effort.
What’s not working
1. What exactly IS personal data?
We have had a lot of conversations about what data was in scope… and I mean A LOT. According to the GDPR, personal data is defined as any information related to an identified or identifiable natural person. That puts just about every piece of data in scope. And while it may seem like an all-or-nothing approach may be easier, consider risks that could affect things like availability, productivity, retention, etc. when implementing controls, then scope programs appropriately to address those risks in a meaningful way.
2. “Yes, we are GDPR compliant!”
One thing we realized very quickly was that it wasn’t enough to simply ask our vendors if they were “GDPR compliant.” We ended up with a lot of “Yes!” answers that upon further investigation were definite “No’s.” Some lessons learned:
Understand the specific requirements you have for vendors: Can they delete or anonymize data? Can they delete users?
Whenever possible, schedule a call with your vendors to talk through your needs instead of filing tickets or emailing. We found it was much easier to get answers to our questions when we could talk with a technical representative.
Ask for a demo so they can show you how they’ll delete or anonymize data and/or users.
Don’t rely on a contractual statement that data will be deleted at the end of a contract term. Many tools still aren’t able to actually do this. It’s important that you know what risks you are carrying with each vendor.
Audit your vendors to ensure they are doing what they said they would.
Would we do it all over again?
Actually, yes. While our GDPR project caused some grumbling and frustration at the beginning, it has now become an integrated part of how we operate. There is no panic and no annoyance. Instead, there are lots of great proactive conversations about data. At the end of the day, we have matured our tool management and our privacy and security practices, and our data owners feel a stronger sense of data ownership.
Wanna see a sample of our Application Lifecycle Management (ALM) vetting checklist?
Yeah. You read that right. I’m an information security analyst now, but it wasn’t long ago that I was living in the heart of Silicon Valley…selling mattresses!
So there I was, in my early 20s. I’d missed the first .com gold rush, I had no degree and I basically used my laptop to play World of Warcraft. But selling mattresses DID have its advantages. Besides being extremely lucrative at the time (no one bought mattresses online yet), “product testing” consisted of taking naps on expensive beds and making sure the massage chairs worked properly, and I got paid to talk to people about sleeping — a favorite pastime of mine to this day. I had a lot of downtime…so, I started studying.
After a short stint in banking, I landed a sales gig at a tech startup. I was 33 and just getting into the technology space. Sales is a hard habit to kick!
Next, I was living in Minnesota and looking for yet another sales gig. This time in Silicon Prairie. At this point, I’d heard of Code42 and knew that’s where I wanted to be. I told my soon-to-be director that I didn’t care what the role was, I wanted in. I knew I could figure things out from there. A week later, I was on an amazing business development team.
“ I’m not saying information security is for everybody; I’m saying information security is for anybody with the drive and passion to self-educate, move outside their comfort zone and be brave enough to introduce themselves to perfect strangers! ”
By now you’re asking, “What does any of this have to do with information security?” At least I would be. Hang in there, we’re close. The context here matters. Understand that at this point, I’d been in sales for more than twenty years!
Then, two things happened. First, I attended what we call “Experience Week.” Essentially, it’s a week of getting to know the leadership team, the culture and our co-workers at Code42. Our CEO Joe Payne got up to speak. I’m sure it was informative and truly inspirational but I mostly remember one thing he said, “Here at Code42 we have a value: Get it done. Do it right. And if you’re getting it done and doing it right and you want to do something else, tell us. We’ll help in any way we can.” Sometimes you hear these things from leadership, and it doesn’t actually mean anything. But I decided to put this to the test.
At the same time, I just happened to be reading “Managing Oneself” by Peter F. Drucker (a must-read for any professional BTW). There was one statement that hit me like a ton of bricks: “After 20 years of doing very much the same kind of work, people are very good at their jobs…and yet they are still likely to face another 20 if not 25 years of doing the same kind of work. That is why managing oneself increasingly leads them to begin a second career.” This was becoming a theme for me, so I figured this was my chance to leap out of my comfort zone and reach for something exciting!
I knew, with every bone in my body, I did NOT want to spend the next 20+ years of my professional life generating my income by convincing others to part with theirs. So, now what?
Well, after consulting with my personal board of directors and a whole lot of prayer, I took a look at the digital landscape and knew I wanted to transition into security. The decision was based on learning some key elements of the security space:
There are currently 3 million unfilled cybersecurity positions globally. ((ISC)2 Workforce Study)
52% of CISO respondents named “communication & people skills” as a top quality in potential candidates. (Dark Reading)
No IT degree required!
Opportunity? Check. Can I talk to people? Double check. No IT degree required? Check. (And, whew!)
Evan Francen of FRSecure is fond of saying, “Get into security! There’s plenty of work to go around.” OK…thanks Evan! Uhhh, how?
“ Luckily, there is an exhaustive amount of resources available in the wild for anyone curious enough to look. ”
Luckily, there is an exhaustive amount of resources available in the wild for anyone curious enough to look. Believe me, I checked out every free resource known to man. But while I was building knowledge, I wondered if it would be enough to get my foot in the door. My inner sales guru said, “No grasshopper, you need to meet people who can help.” I’d say to anyone at this point — what really makes a difference for someone without the degrees or the experience is your ability to demonstrate passion and enthusiasm for security and a real desire to establish and foster genuine relationships with folks that are already in the security world. My new contacts in security had that passion — and I needed to show I did, too!
First, I sought out our internal security team and requested time to chat with anyone who would humor me, peppered them with questions and, afterward, made sure to send each of them a handwritten ‘thank you’ note.
Second, and probably the most important, I ACTED on their suggestions. The worst thing you can do is ask people for their advice and then completely ignore their recommendations.
By this point I had the bug and I wasn’t going to take no for an answer. I even took my sales skills on a road show. Here’s what I did:
I took PTO to attend security conferences and trade shows.
I found security happy hours and meetups where I could network with other security professionals.
I found no shame in doggedly hounding my CISO to give me a shot.
I found opportunities to interact with her and the security team. Even going so far as to show up, front row, to a panel discussion she was speaking on ABOUT the talent shortage in the security field. A bit creepy? Sure. Effective? Well, two months later I was offered a role as an information security analyst.
I’m not saying information security is for everybody; I’m saying information security is for anybody with the drive and passion to self-educate, move outside their comfort zone and be brave enough to introduce themselves to perfect strangers! You don’t have to be super technologically savvy (although that certainly helps) or have a master’s in computer science, or be some hacker in a basement wearing a black hoodie bent over a keyboard trying to take down “the man.”
Start with taking a look at the industry — do your research, make sure to network with people (security folks are often excited to share their knowledge), be a part of something bigger than yourself and want to be one of the good guys! Teaching people security is easy — it’s having the chops and the drive that’s up to you.
In the old days, security teams and engineering teams were highly siloed: security teams were concerned with things like firewalls, anti-virus and ISO controls, while engineering teams were concerned with writing and debugging code in order to pass it along to another team, like an operations team, to deploy. When they communicated, it was often in the stilted form of audit findings, vulnerabilities and mandatory OWASP Top Ten training classes that left both sides feeling like they were mutually missing the point.
While that may have worked in the past, the speed at which development happens today means that changes are needed on both sides of the equation to improve efficiency and reduce risk. In this blog post, I’ll be talking about why security teams need to learn to code (the flip side of the equation, why engineering teams need to learn security, may be a future blog post).
“ Simply being comfortable with one or two languages can allow you to do code reviews and provide another pair of eyes to your engineers as well. ”
While it’s not uncommon for people to come into security having done code development work in the past, it is not necessarily the most typical career path. Oftentimes, people come into the security realm without any coding experience other than perhaps a Java or Python course they took at school or online. Because security encompasses so many different activities, there would appear to be no downside if security folks outside of a few highly specialized roles, like penetration testing, didn’t have coding experience. However, I’m here to tell you that coding can be beneficial to any security professional, no matter the role.
Let’s start with automation. No matter what you are doing in security, odds are that you have some kind of repeatable process, such as collecting data, doing analysis, or performing some action, that you can automate. Fortunately, more and more applications have APIs available to take advantage of, and are therefore candidates for writing code to do the work so you don’t have to.
At this point, you may think that this sounds a lot like a job for a Security Orchestration Automation and Response (SOAR) tool. A SOAR tool can absolutely be used to automate activities, but already having a SOAR tool is certainly not a requirement. A simple script that ties together a couple of applications via an API to ingest, transform and save data elsewhere may be all you need in order to start getting value out of coding. Plus, this can be a great way to determine how much value you may be able to get out of a full-blown SOAR tool.
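That ingest-transform-save pattern can be sketched in a few lines of Python. Note that the alert source, field names and severity threshold here are all hypothetical, purely for illustration:

```python
import json
import urllib.request

def fetch_alerts(api_url):
    """Ingest: pull raw alert records from a (hypothetical) JSON API."""
    with urllib.request.urlopen(api_url) as resp:
        return json.load(resp)

def triage(alerts, min_severity=7):
    """Transform: keep only high-severity alerts and normalize the
    fields a downstream tool cares about."""
    return [
        {"host": a["host"], "severity": a["severity"], "summary": a["title"]}
        for a in alerts
        if a.get("severity", 0) >= min_severity
    ]

def save(records, path):
    """Save: write the normalized records where another tool can pick them up."""
    with open(path, "w") as f:
        json.dump(records, f, indent=2)

# Wiring the transform step against sample data:
raw = [
    {"host": "web-1", "severity": 9, "title": "Suspicious login"},
    {"host": "db-2", "severity": 3, "title": "Disk usage warning"},
]
print(triage(raw))
```

If a handful of scripts like this start carrying real operational weight, that is a good signal a full SOAR tool may be worth the investment.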
Learning to code won’t just help your own efficiency. Writing your own code can help make all of those OWASP Top Ten vulnerabilities much more concrete, which can lead to better security requirements when collaborating with engineers. Simply being comfortable with one or two languages can allow you to do code reviews and provide another pair of eyes to your engineers as well. It’s also incredibly valuable to be able to give engineers concrete solutions when they ask about how to remediate a particular vulnerability in code.
Here at Code42, our security team believes strongly in the value of learning to code. That’s why we’ve set a goal for our entire security team, no matter the role, to learn how to code and to automate at least one repetitive activity with code in 2019. By doing this, we will make our overall security team stronger, work more efficiently and provide more valuable information to our engineering teams.
No matter what we do in our jobs, we all want to provide value back to the organizations where we work. With some jobs, tangible evidence of value is very apparent, such as hitting your sales quota or building code for a feature in your software. In business continuity, that can be a bit of a challenge. To start, most people don’t understand what it is, or what responsibilities are tied to it. If someone asks me what I do, and my response is: “business continuity,” the conversation usually goes a different direction shortly thereafter. This makes it a challenge from the get-go in showing value to your company.
“ If ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful business continuity program. ”
Here are a few key principles I have learned in my business continuity journey, that have helped me show value within my organization:
Get buy-in from leadership
Real simple: your business continuity program has to have leadership buy-in in order to succeed. If you think you’re fully prepared to respond to and recover from a disaster without buy-in from leadership, you’re kidding yourself. Leadership needs to understand what you’re doing, why you’re doing it and how it will benefit their department and the company as a whole. This will give you top-level support and make your job easier. Having guidance from above will ensure your requests for resources for a business impact analysis and recovery testing are granted.
No doubt getting leadership’s attention can be a challenge, but it has to happen. I have been a part of organizations that didn’t have it, and the result was a program that could never meet its full potential because our requests for time and effort from other departments were never a priority.
At Code42, we worked with each member of our executive leadership team to outline what we were doing, why we’re doing it and what assistance we would need from their department. Department leaders were then able to give direction on who they wanted us to work with and set the whole program in motion.
Narrow the scope of your program
On the surface this seems counterintuitive. Why not cover every function and supporting activity? The reasoning is that most companies don’t have a dedicated team of employees focused on business continuity; for some, business continuity is simply one of many responsibilities they hold. Beyond the manpower issue, the further you head into supporting functions and away from what’s truly critical, the lower the rate of return for the company. The key is to focus on what’s critical. I have experienced this firsthand: my drive to make sure all business functions were documented and prepared for had me spending countless hours covering the full spectrum of the business. By the time I was finished, the data was already out of date, and the effort amounted to a poor use of resources with little to no value for the company.
When we worked with each member of the executive leadership team at Code42, we kept our scope to the top two critical functions that each department performs. This helped our program avoid the minutiae and focus squarely on what’s critical for supporting our product, our customers and our employees.
Make the information accessible
The information for your business continuity program should not be sequestered away from your employees; it should be easy to view and update. This is a rather obvious statement, but one that I have seen many companies struggle with. Here at Code42, we made a misstep by thinking the solution to our business continuity challenges lay with a continuity software provider. The intent was for it to help us manage all of our data, produce plans and be a one-stop shop for all things business continuity. Not long after onboarding, challenges started to emerge. The biggest was that the information was not accessible to the workforce. The other was that it didn’t tie in to any software already in use at Code42. It was on an island, and of little to no value to the business. A pivot was needed, and thankfully we didn’t have to go far for an answer.
The answer came from taking a step back and determining what tools employees use across the company on a day-to-day basis. For us, the answer lay within Confluence, which serves as our internal wiki. This is where we build out department-focused pages covering their respective critical functions and dependencies. Connecting to Confluence allowed us to tie in another company-wide application, JIRA, for tickets related to vendor assessments and risk and incident tickets. Our focus throughout the process was to ensure value was being passed on to Code42 and its employees, and the key piece of that was having information easily accessible.
Business continuity has a number of inherent challenges, but keeping value to the company at the center of your decisions will go a long way toward a successful program. I hope the principles I laid out here help you provide better value to your own company.
“We’re moving to the cloud.” If you haven’t heard this already, it’s likely you will soon. Moving to the public cloud poses many challenges upfront for businesses today. Primary problems that come to the forefront are security, cost and compliance. Where do businesses even start? How many tools do they need to purchase to fulfill these needs?
After deciding to jump-start our own cloud journey, we spun up our first account in AWS, and it was immediately apparent that traditional security controls weren’t necessarily going to adapt. Trying to lift and shift firewalls, threat and vulnerability management solutions and the like ran into a multitude of issues, including but not limited to networking, AWS IAM roles and permissions, and tool integrations. It was clear that tools built for on-premises deployments were no longer cost-effective or technologically effective in AWS, and a new solution was needed.
“ It was clear that tools built for on-premises deployments were no longer cost-effective or technologically effective in AWS, and a new solution was needed. ”
To address these challenges, we decided to move to a multi-account strategy and automate our resource controls to support increasing consumption and account growth. Our answer was Capital One’s Cloud Custodian open source tool, because it helps us manage our AWS environments by ensuring the following business needs are met:
Compliance with security policies
Enforcement of AWS tagging requirements
Identification of unused resources for removal or review
Enforcement of off-hours to maximize cost savings
Enforcement of encryption requirements
Detection of over-permissive AWS Security Groups
And many more…
After identifying a tool that could automate our required controls in multiple accounts, it was time to implement it. The rest of this blog will focus on how Cloud Custodian works, how Code42 uses the tool, what kinds of policies (with examples) Code42 implemented, and resources to help you get started implementing Cloud Custodian in your own environment.
How Code42 uses Cloud Custodian
Cloud Custodian is an open source tool created by Capital One. You can use it to automatically manage and monitor public cloud resources as defined by user written policies. Cloud Custodian works in AWS, Google Cloud Platform and Azure. We, of course, use it in AWS.
As a flexible “rules engine,” Cloud Custodian allowed us to define rules and remediation efforts into one policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. These policies are written in a simple YAML configuration file that specifies a resource type, resource filters and actions to be taken on specified targets. Once a policy is written, Cloud Custodian can interpret the policy file and deploy it as a Lambda function into an AWS account. Each policy gets its own Lambda function that enforces the user-defined rules on a user-defined cadence. At the time of this writing, Cloud Custodian supports 109 resources, 524 unique actions and 376 unique filters.
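To make the structure concrete, here is a minimal sketch of what such a policy file can look like. The policy name, tag key and schedule below are hypothetical, not taken from Code42's actual policies; consult the Cloud Custodian documentation for the exact schema supported by your version:

```yaml
policies:
  - name: ec2-require-owner-tag      # hypothetical policy name
    resource: ec2                    # the resource type to target
    mode:
      type: periodic                 # deploy as a Lambda on a schedule
      schedule: "rate(1 day)"        # user-defined cadence
    filters:
      - "tag:Owner": absent          # match instances missing an Owner tag
    actions:
      - stop                         # stop non-compliant instances
```

Each top-level entry under `policies` becomes its own Lambda function when deployed, matching the one-function-per-policy model described above.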
As opposed to writing and combining multiple custom scripts that make AWS API calls, retrieve responses and then execute further actions based on the results, Cloud Custodian simply interprets an easy-to-write policy, takes into consideration the resources, filters and actions, and translates them into the appropriate AWS API calls. These simplifications make this type of work easy and achievable even for non-developers.
“ As a flexible rules engine, Cloud Custodian allowed us to define rules and remediation efforts into one policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. ”
Now that we understand the basic concepts of Cloud Custodian, let’s cover the general implementation. Cloud Custodian policies are written and validated locally. These policies are then deployed either by running Cloud Custodian locally and authenticating to AWS or, in our case, via CI/CD pipelines. At Code42, we deploy a baseline set of policies to every AWS account as part of the bootstrapping process and then add or remove policies as needed for specific environments. In addition to account-specific policies, there are scenarios where a team may need an exemption; for these, we typically allow an “opt-out” tag on some policies. Policy violations are reported to a Slack channel via a webhook created for each AWS account. In addition, we distribute the resources.json logs directly into a SIEM for more robust handling and alerting.
Broadly speaking, Code42 has categorized policies into two types: (i) notify only and (ii) action and notify. Notify-only policies are more hygiene-related and include policies like tag compliance checks, multi-factor authentication checks and more. Action-and-notify policies take actions once certain conditions are met, unless tagged for exemption; they include policies like s3-global-grants, ec2-off-hours-enforcement and more. The output from the custodian policies is also ingested into a SIEM solution to provide more robust visualization and alerting. This allows individual account owners to review policy violations and assign remediation actions to their teams. For Code42, these dashboards give both the security team and account owners a view of the overall health of our security controls and account hygiene. Examples of Code42 policies may be found on GitHub.
What policies did we implement?
There are three primary policy types Code42 deployed: cost savings, hygiene and security. Since policies can take actions on resources, we learned it is imperative that the team implementing the policies collaborate closely with any teams affected by them, to ensure all stakeholders know how to find and react to alerts and can provide feedback and adjustments when necessary. Good collaboration with your stakeholders will ultimately drive the level of success you achieve with this tool. Let’s hit on a few specific policies.
Cost Savings Policy – ec2-off-hours-enforcement
EC2 instances are one of AWS’s most commonly used services. EC2 allows a user to deploy cloud compute resources on demand as necessary; however, there are many cases where the compute gets left “on” even when it’s not being used, which racks up costs. With Cloud Custodian, we’ve allowed teams to define “off-hours” for their compute resources. For example, if I have a machine that only needs to be online two hours a day, I can automate the start and stop of that instance on a schedule, saving 22 hours of compute time per day. As AWS usage increases and expands, these cost savings add up quickly.
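A sketch of what such a policy can look like, using Cloud Custodian's built-in offhour/onhour filters. The hours and timezone below are illustrative, not Code42's actual schedule:

```yaml
policies:
  - name: ec2-off-hours-enforcement   # stop instances at the end of the day
    resource: ec2
    filters:
      - type: offhour                 # built-in off-hours filter
        default_tz: ct                # illustrative timezone
        offhour: 19                   # stop at 7 p.m.
    actions:
      - stop
  - name: ec2-on-hours-enforcement    # start instances when the day begins
    resource: ec2
    filters:
      - type: onhour
        default_tz: ct
        onhour: 7                     # start at 7 a.m.
    actions:
      - start
```

The offhour/onhour filters also support per-resource overrides via a schedule tag, which is what lets individual teams define their own windows rather than being locked to an account-wide default.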
Hygiene Policy – ec2-tag-enforcement
AWS resource tagging is highly recommended in any environment. Tagging allows you to define multiple keys with values on resources that can be used for sorting, tracking, accountability, etc. At Code42, we require a pre-defined set of tags on every resource that supports tagging in every account. Manually enforcing this would be nearly impossible, so we utilize a custodian policy to enforce our tagging requirements across the board. This policy performs the series of actions described below.
The policy applies filters to look for all EC2 resources missing the required tags.
When a violation is found, the policy adds a new tag to the resource “marking” it as a violation.
The policy notifies account owners of the violation and that the violating instance will be stopped and terminated after a set time if it is not fixed.
If Cloud Custodian finds the required tags have been added within 24 hours, it removes the violation tag. If the proper tags are not added, the policy continues to notify account owners that their instance will be terminated. If the violation is not fixed within the specified time period, the instance is terminated and a final notification is sent.
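The mark-and-escalate flow above can be sketched roughly as follows. The required tag names, marker tag and grace period here are hypothetical (Code42's actual baseline policies are published on GitHub), and the notification step is omitted for brevity:

```yaml
policies:
  - name: ec2-tag-enforcement-mark    # step 1-2: find and mark violations
    resource: ec2
    filters:
      - or:                           # any missing required tag is a violation
          - "tag:Owner": absent
          - "tag:CostCenter": absent  # hypothetical required tag
    actions:
      - type: mark-for-op             # tag the resource as a violation and
        tag: c7n_tag_violation        # schedule a stop if it is not fixed
        op: stop
        days: 1
  - name: ec2-tag-enforcement-unmark  # step 4: clear the mark once fixed
    resource: ec2
    filters:
      - "tag:Owner": present
      - "tag:CostCenter": present
      - "tag:c7n_tag_violation": present
    actions:
      - type: unmark                  # remove the violation marker
        tags: ["c7n_tag_violation"]
```

Splitting the mark and unmark logic into separate policies keeps each Lambda function simple and makes it easy to audit which resources are currently in violation by searching for the marker tag.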
This policy ultimately ensures we have tags that distinguish things like a resource “owner.” An owner tag allows us to identify which team owns a resource and where the deployment code for that resource might exist. With this information, we can drastically reduce investigation/remediation times for misconfigurations or for troubleshooting live issues.
Security Policy – S3 bucket encryption
At Code42, we require that all S3 buckets have either KMS or AES-256 encryption enabled. It is important to remember that we have an “opt-out” capability built into these policies so they can be bypassed when necessary and after approval. The bypass is done via a tag that is easy for us to search for and review to ensure bucket scope and drift are managed appropriately.
This policy is relatively straightforward. If the policy sees a “CreateBucket” CloudTrail event, it checks the bucket for encryption. If encryption is not enabled and an appropriate bypass tag is not found, the policy deletes the bucket immediately and notifies the account owners. By this point, you’ve likely heard of a data leak caused by a misconfigured S3 bucket. It can be nearly impossible to manually manage a large-scale S3 deployment or buckets created by shadow IT. This policy helps account owners learn good security hygiene, and at the same time it ensures our security controls are met automatically, without having to search through accounts and buckets by hand. Ultimately, this helps verify that S3 misconfigurations don’t lead to unexpected data leaks.
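In outline, such a policy looks something like the sketch below. The policy name and exemption tag are hypothetical, and the exact encryption filter may differ across Cloud Custodian versions:

```yaml
policies:
  - name: s3-encryption-enforcement    # hypothetical policy name
    resource: s3
    mode:
      type: cloudtrail                 # event-driven rather than periodic
      events:
        - CreateBucket                 # run whenever a bucket is created
    filters:
      - type: bucket-encryption        # default encryption is not enabled
        state: false
      - "tag:EncryptionOptOut": absent # hypothetical approved-bypass tag
    actions:
      - delete                         # remove the non-compliant bucket
```

Because the policy runs in CloudTrail mode, enforcement happens within minutes of bucket creation instead of waiting for a scheduled sweep.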
Just starting out?
Hopefully this blog has helped highlight the power of Capital One’s Cloud Custodian and its automation capabilities. Cloud Custodian policies can be easily learned and written by non-developers, and they provide needed security capabilities. Check out the links in the “Resources” section below for Capital One’s documentation, as well as examples of some of Code42’s baseline policies that are deployed into every AWS account during our bootstrap process. Note: these policies should be tuned to your business and environment needs, and not all will be applicable to you.
It’s Time to Rethink DLP
See how reframing DLP around protection rather than prevention delivers powerful benefits.
Aakif Shaikh, CISSP, CISA, CEH, CHFI is a senior security analyst at Code42. His responsibilities include cloud security, security consulting, penetration testing and insider threat management. Aakif brings 12+ years of experience across a wide variety of technical domains within information security, including information assurance, compliance and risk management. Connect with Aakif Shaikh on LinkedIn.
Byron Enos is a senior security engineer at Code42, focused on cloud security and DevSecOps. Byron has spent the last four years helping develop secure solutions for multiple public and private clouds. Connect with Byron Enos on LinkedIn.
One of the core beliefs of our security team at Code42 is SIMPLICITY. All too often, we make security too complex, whether because there are no easy answers or because the answers are very nuanced. But complexity also makes it really easy for users to find work-arounds or ignore good practices altogether. So, we champion simplicity whenever possible and make it a basic premise of all the security programs we build.
“ At Code42, we champion simplicity whenever possible and make it a basic premise of all the security programs we build. ”
Change management is a great example of this. Most people hear change management and groan. At Code42, we’ve made great efforts to build a program that is nimble, flexible and effective. The tenets we’ve defined that drive our program are to:
Notice that compliance is there, but last on the list. While we do not negate the importance of compliance in conversations around change management or any other security program, we avoid at all costs using the justification of “because compliance” for anything we do.
Based on these tenets, we focus our efforts on high-impact changes that have the potential to affect our customers (both external and internal). We set risk-based maintenance windows that balance potential customer impact with the need to move efficiently.
We gather with representatives from both the departments making changes (think IT, operations, R&D, security) and those impacted by changes (support, sales, IX, UX) at our weekly Change Advisory Board meeting–one of the best attended and most efficient meetings of the week–to review, discuss and make sure teams are appropriately informed of what changes are happening and how they might be impacted.
This approach has been working really well. Well enough, in fact, for our Research Development & Operations (RDO) team to embrace DevOps in earnest.
New products and services were being deployed through automated pipelines instead of through our traditional release schedule. Instead of bundling lots of small changes into a product release, developers were now looking to create, test and deploy features individually–and autonomously. This was awesome! But also, our change management program–even in its simplicity–was not going to cut it.
“ We needed to not make change control a blocker in an otherwise automated process. We looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets to give us visibility to the work being done. ”
So with the four tenets we used to build our main program, we set off to evolve change management for our automated deployments. Thankfully, because all the impacted teams had seen the value of our change management program to date, they were on board and instrumental in evolving the program.
But an additional tenet had to be considered for the pipeline changes: we needed change control not to become a blocker in an otherwise automated process. So we looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets that give us visibility into the work being done. We defined levels of risk tied to the deployments and set approvers and release windows based on risk. This serves both as a control to minimize potential impact to customers and as a challenge to developers to push code that is as resilient and low-impact as possible so they can deploy at will.
We still have work to do. Today we are tracking when changes are deployed manually. In our near future state our pipeline tooling will serve as a gate and hold higher risk deployments to be released in maintenance windows. Additionally, we want to focus on risk, so we are building in commit hooks with required approvers based on risk rating. And, again, because we worked closely with the impacted teams to build a program that fit their goals (and because our existing program had proven its value to the entire organization), the new process is working well.
Most importantly, evolving our change process for our automated workflows allows us to continue to best serve our customers by iterating faster and getting features and fixes to the market faster.
Part of the success criteria for any security program is to ensure the process, control or technology utilized has some additional benefit aside from just making things “more secure.” Most controls we impose to make ourselves safer often come at the expense of convenience. But what if we took a different approach when thinking about them? A mentor of mine often starts a security design discussion by asking us to consider the following:
Why do cars have brakes?
Naturally, my first thought is that brakes allow the driver to slow or stop when going too fast. After all, a car with no brakes is dangerous, if not completely useless. However, when we consider that the braking system in the car enables the driver to go as fast as they want, the purpose of this control takes on a new meaning.
Changing perceptions about the controls we impose on security design within Information Security doesn’t come easy. Even some of the most seasoned infosec professionals will insist a particular control be in place without considering how the control impacts workflow, or worse, the bottom line.
“ As security professionals, we need to design controls that empower our business in the safest way possible, without getting in the way of where we’re trying to go. ”
Aligning controls and risks
Some of the most impactful security controls are the ones we don’t even realize are there. When designed correctly, they mitigate risk while providing a benefit to the user. The proliferation of biometric security is a great example of this. My mobile phone and computer offer the ability for me to access the device by simply touching or staring at it. Because I am much more focused on how convenient and easy it is to unlock my phone to look at cat pictures, I forget that these controls were designed as a security measure.
As a security professional, I do, however, need some assurance that the controls can’t be easily circumvented. For example, a quick search for exploits of fingerprint or face-recognition systems will show that they can be easily fooled with a 3D printer, some Play-Doh and a little time. However, when enhanced with an additional factor like a password or PIN, the authentication mechanism evolves to something much more difficult to compromise while being considerably easier for me to remember than a 16-character password that I have to change every ninety days.
In Information Security, this is why it’s important for us to consider how we design solutions for our environment. If all I’m protecting is access to cat pictures, is my face or fingerprint unlock enough? I’d say so. But for my Database Administrator (DBA) or Identity and Access Management (IAM) administrator to protect my company’s crown jewels? Definitely not.
Creating controls with a purpose
And this is what I think brings us to the crux of security design: as an end user, if I don’t know why a control is there, I won’t use it, or I might even try to go around it. Moreover, if I have no idea that it’s there, it had better work without getting in my way.
Let’s return to the car example. My daughter just finished the process of getting her driver’s license. In doing so, just like her old man, she was subject to videos depicting the horrors of car accidents and negligent driving. Way back in my day, the message was clear: driving death was thwarted by seatbelts and the ten-and-two. For her, it’s not texting and driving and the eight-and-four. I have absolutely no idea how a seatbelt can help me avoid an accident, but I’m crystal clear why I need one, should it happen. If I ask her about texting-and-driving, she’ll be equally clear that it’s possible to kill someone while doing it.
Getting back to the topic of security design, if I don’t understand why I need the control, it’s better that I have no awareness it’s around. Just like an airbag, I need to trust it’s there to protect me. On the flip side, I definitely need to know the importance of buckling up or putting my phone in the glovebox so I can keep my eyes on the road.
And this is what excites me about what we’re building at Code42 with our Code42 Next-Gen Data Loss Protection solution. Transparent security.
In the traditional Data Loss Prevention (DLP) space, transparent security is not an easy task. More often than not, people just trying to do their jobs end up blocked by a one-size-fits-all policy. Our application, on the other hand, enables security administrators to give the business what it wants: protection for its best ideas without Security getting in the way.
Computers, just like cars, can be dangerous and yet, each of us can’t imagine a life without them. Their utility demands they be safe and productive. As security professionals, we need to design controls that empower our business in the safest way possible, without getting in the way of where we’re trying to go.
As a security company, it’s imperative that we uphold high standards in every aspect of our security program. One of the most important and foundational of these areas is our Identity and Access Management (IAM) program. As part of Code42’s approach to this program, we have identified guiding principles that have a strong focus on automation. Below is an outline of our journey.
IAM guiding principles
Every IAM program should have guiding principles developed jointly with HR, IT and security. Here are a few of ours:
1. HR would become the source of truth (SoT) for all identity lifecycle events, ranging from provisioning to de-provisioning.
The initial focus was to automate the provisioning and de-provisioning processes, then address the more complex transfer scenario in a later phase. HR would trigger account provisioning when an employee or contractor was brought onboard, and shut off access as workers left the company. Further, the HR system would become authoritative for the majority of identity-related attributes for our employees and contractors. This allowed us to automatically flow updates made to an individual’s HR record (e.g. changes in job title or manager) to downstream connected systems that previously required a Help Desk ticket and manual updates.
2. Our objectives would not be met without data accuracy and integrity.
In-scope identity stores such as Active Directory (AD) and the physical access badge system had to be cleansed of legacy (stale) and duplicate user accounts before they were allowed to be onboarded into the new identity management process. Any user account that could not be matched or reconciled to a record in the SoT system was remediated. Although a rather laborious exercise, this was unquestionably worth it in order to maintain data accuracy.
3. Integrate with existing identity infrastructure wherever possible.
We used AD as our centralized enterprise directory, which continues to function as the bridge between our on-premises environment and our cloud identity broker, Okta. Integrating with AD was of crucial importance, as this would allow us to centrally manage access to both on-premises and cloud-based applications. When a worker leaves the company, all we need to do is ensure the user account is disabled in AD, which in turn disables the person’s access in Okta.
Once we had agreement on our guiding principles, it was time to start the design and implementation phase. We built our solution using Microsoft’s Identity Manager (MIM) because our IAM team had used Microsoft’s provisioning and synchronization engine in the past and found it to be easy to configure with many built-in connectors and extendable via .NET.
IAM implementation phases
Identity in every organization is managed through a lifecycle. Below are two of the identity phases we have worked through and the solutions we built for our organization:
1. Automating provisioning and deprovisioning is key, but can also cause challenges.
One challenge we had was a lag between a new employee starting and the employee record being populated in the systems that act as the source of truth. This left no lead time to provision a user account and grant access for the incoming worker. We solved this obstacle by creating an intermediate “SoT identity” database that mirrors the data we receive from our HR system. From there, we were able to write a simple script that ties in to our service desk and creates the necessary database entry.
The next challenge was to automate the termination scenario. Similar to most companies, our HR systems maintain the user record long past an employee’s departure date for compliance and other reasons. Despite this, we needed a way to decommission the user immediately at time of departure. For this, we developed a simple Web Portal that allows our Helpdesk and HR partners to trigger termination. Once a user is flagged for termination in the Portal, the user’s access is automatically disabled by the identity management system. De-provisioning misses are a thing of the past!
2. Re-design and improve the access review process.
This phase aims to replace our current manual, spreadsheet-based, quarterly access certification process with a streamlined process using the built-in review engine in the identity management tool.
Implementing IAM at Code42 has been an awesome experience; and with the impending launch of the request portal, this year will be even more exciting! No matter how far along you are in your IAM implementation journey, I hope the concepts shared here help you along the way.