
Tips From the Trenches: How I Moved From Mattress Sales to Malware

Yeah. You read that right. I’m an information security analyst now, but it wasn’t long ago that I was living in the heart of Silicon Valley…selling mattresses!

So there I was, in my early 20s. I’d missed the first .com gold rush, I had no degree and I basically used my laptop to play World of Warcraft. But selling mattresses DID give me some advantages. Besides being extremely lucrative at the time (no one bought mattresses online yet), “product testing” consisted of taking naps on expensive beds, making sure the massage chairs worked properly and getting paid to talk to people about sleeping, a favorite pastime of mine to this day. I had a lot of downtime…so, I started studying.

After a short stint in banking, I landed a sales gig at a tech startup. I was 33 and just getting into the technology space. Sales is a hard habit to kick!

Next, I was living in Minnesota and looking for yet another sales gig. This time in Silicon Prairie. At this point, I’d heard of Code42 and knew that’s where I wanted to be. I told my soon-to-be director that I didn’t care what the role was, I wanted in. I knew I could figure things out from there. A week later, I was on an amazing business development team.

“ I’m not saying information security is for everybody, I’m saying information security is for anybody with the drive and passion to self-educate, move outside your comfort zone and be brave enough to introduce yourself to perfect strangers! ”

By now you’re asking, “What does any of this have to do with information security?” At least I would be. Hang in there, we’re close. The context here matters. Understand that at this point, I’d been in sales for more than twenty years!

Then, two things happened. First, I attended what we call “Experience Week.” Essentially, it’s a week of getting to know the leadership team, the culture and our co-workers at Code42. Our CEO Joe Payne got up to speak. I’m sure it was informative and truly inspirational but I mostly remember one thing he said, “Here at Code42 we have a value: Get it done. Do it right. And if you’re getting it done and doing it right and you want to do something else, tell us. We’ll help in any way we can.” Sometimes you hear these things from leadership, and it doesn’t actually mean anything. But I decided to put this to the test.

At the same time, I just happened to be reading “Managing Oneself” by Peter F. Drucker (a must-read for any professional BTW). There was one statement that hit me like a ton of bricks: “After 20 years of doing very much the same kind of work, people are very good at their jobs…and yet they are still likely to face another 20 if not 25 years of doing the same kind of work. That is why managing oneself increasingly leads them to begin a second career.” This was becoming a theme for me, so I figured this was my chance to leap out of my comfort zone and reach for something exciting!

I knew, with every bone in my body, I did NOT want to spend the next 20+ years of my professional life generating my income by convincing others to part with theirs. So, now what?

Well, after consulting with my personal board of directors and a whole lot of prayer, I took a look at the digital landscape and knew I wanted to transition into security. The decision was based on learning some key elements of the security space:

  • There are currently 3 million unfilled cybersecurity positions globally. ((ISC)2 Workforce Study)
  • 52% of CISO respondents named “communication & people skills” as a top quality in potential candidates. (Dark Reading)
  • No IT degree required!

Opportunity? Check. Can I talk to people? Double check. No IT degree required? Check. (And, whew!)

Evan Francen of FRSecure is fond of saying, “Get into security! There’s plenty of work to go around.” OK…thanks Evan! Uhhh, how?

“ Luckily, there is an exhaustive amount of resources available in the wild for anyone curious enough to look. ”

Luckily, there is an exhaustive amount of resources available in the wild for anyone curious enough to look. Believe me, I checked out every free resource known to man. But while I was building knowledge, I wondered if it would be enough to get my foot in the door. My inner sales guru said, “No grasshopper, you need to meet people who can help.” For anyone without the degrees or the experience, what really makes a difference is your ability to demonstrate passion and enthusiasm for security and a real desire to establish and foster genuine relationships with folks who are already in the security world. My new contacts in security had that passion — and I needed to show I did, too!

First, I sought out our internal security team and requested time to chat with anyone who would humor me, peppered them with questions and afterward made sure to send each of them a handwritten ‘thank you’ note.

Second, and probably most important, I ACTED on their suggestions. The worst thing you can do is ask people for their advice and then completely ignore their recommendations.

By this point I had the bug and I wasn’t going to take no for an answer. I even took my sales skills on a road show. Here’s what I did:

  • I took PTO to attend security conferences and trade shows.
  • I found security happy hours and meetups where I could network with other security professionals.
  • I found no shame in doggedly hounding my CISO to give me a shot.
  • I found opportunities to interact with her and the security team. Even going so far as to show up, front row, to a panel discussion she was speaking on ABOUT the talent shortage in the security field. A bit creepy? Sure. Effective? Well, two months later I was offered a role as an information security analyst.

I’m not saying information security is for everybody, I’m saying information security is for anybody with the drive and passion to self-educate, move outside your comfort zone and be brave enough to introduce yourself to perfect strangers! You don’t have to be super technologically savvy (although that certainly helps), have a master’s in computer science, or be some hacker in a basement wearing a black hoodie bent over a keyboard trying to take down “the man.”

Start by taking a look at the industry — do your research, make sure to network with people (security folks are often excited to share their knowledge), be a part of something bigger than yourself and want to be one of the good guys! Getting taught security is the easy part — having the chops and the drive is up to you.

Now, the work begins! Go get ‘em, grasshopper!

Connect with Josh Atkinson on LinkedIn.


Insights From the 2019 Cyberthreat Defense Report

This week, I joined Steve Piper, CEO of CyberEdge Group, to review the findings of the 2019 Cyberthreat Defense Report. The Cyberthreat Defense Report is designed to complement Verizon’s annual Data Breach Investigations Report and provides a penetrating look at how IT security professionals perceive cyberthreats and plan to defend against them. This study surveyed 1,200 IT security decision makers and practitioners from 17 countries, six continents and 19 industries.

Among the key findings this year, there are three that are sending a clear signal for the future of information security.

1. Too much security data. This might sound like a negative, but I view it as a good problem to have. After all, if you have all the pertinent data to help you with a security investigation, why wouldn’t you use it? Unfortunately, while the data may exist, the proper tools to decipher and analyze that data often don’t. This is precisely why 47 percent of respondents acknowledged their organization’s intent to acquire advanced security analytics solutions that incorporate machine learning (ML) technology within the next 12 months.

My take: Having the data is one thing, being able to make quick and visual sense of it is quite another. Quick decision-making is paramount, and in security, time is emerging as a key factor in mitigating risk.

2. Thirteen percent of the overall IT budget is consumed by security. This is up from five percent just two decades ago and will only continue to grow. There is also a critical shortage of qualified IT security personnel, so I expect continued focus on smart investments in technologies.

My take: Security is rightfully taking center stage from a budget perspective. The challenges around too much security data to analyze, lack of skilled security practitioners and the realization that a cyberattack is imminent are only going to keep trending.

3. Insider threats continue to plague security teams. Detecting insider threats remains an enormous challenge for virtually every IT security organization. Although application development and testing remains atop the list of IT security functions perceived as most challenging, detecting rogue insiders and their insider attacks has risen from third place in 2018 to second place in 2019.

My take: Detecting insider threats comes down to how effective a company is in defining, collecting, correlating, analyzing and reporting on insider indicators of compromise. It’s time to take a proactive approach to protecting data.

“ Detecting insider threats comes down to how effective a company is in defining, collecting, correlating, analyzing and reporting on insider indicators of compromise. It’s time to take a proactive approach to protecting data. ”

Other key takeaways:

  • Hottest security technology for 2019. Advanced security analytics tops 2019’s most wanted list for not just the security management and operations category, but across all technologies in this year’s report.
  • Machine learning (ML) garners confidence. More than 90 percent of IT security organizations have invested in ML and/or artificial intelligence (AI) technologies to combat advanced threats. More than 80 percent are already seeing a difference.
  • Attack success redux. The percentage of organizations affected by a successful cyberattack ticked up slightly this year to 78 percent, despite last year’s first-ever decline.
  • Caving in to ransomware. Organizations affected by successful ransomware attacks increased slightly to 56 percent. More concerning, the percentage of organizations that elected to pay ransoms rose considerably, from 39 percent to 45 percent, potentially fueling even more ransomware attacks in 2019.
  • Container security woes. For the second year, application containers edge mobile devices as IT security’s weakest link.
  • Web application firewalls rule the roost. For the second year, the web application firewall (WAF) claims the top spot as the most widely deployed app/data security technology.
  • Worsening skills shortage. IT security skills shortages continued to rise, with 84 percent of organizations experiencing this problem compared to 81 percent a year ago.
  • Security’s slice of the IT budget pie. On average, IT security consumes 12.5 percent of the overall IT budget. The average security budget is going up by 4.9 percent in 2019.

It’s clear that security teams must ensure their organization’s defenses keep pace with changes both to IT infrastructure and the threats acting against it. The good news, at least for 84 percent of survey respondents, is that their IT security budgets are expected to increase in 2019.

Watch the on-demand webinar or get the full 2019 CyberEdge Cyberthreat Defense Report.


Learnings From Verizon’s Insider Threat Report

What does McKinsey call one of the largest unsolved issues in cybersecurity today? Insider threat. They noted that a staggering half of all breaches between 2012 and 2017 had an insider threat component. To make consequential strides in combatting insider threat, the topic must be explored further. Thanks to Verizon’s Threat Research Advisory Center, which produced the Verizon Insider Threat Report, we can take an in-depth look at the role insider threat plays in the broader cyber threat landscape.

The Verizon report draws on statistics from the company’s annual Data Breach Investigations Reports and lessons learned from hundreds of investigations conducted by its internal forensics teams. It highlights how easily insiders can exfiltrate data, and how long detection, by contrast, often takes.

“ Insider threat should no longer be a taboo subject for internal security teams. Denial has not helped – it has only resulted in time-to-discovery being months-to-years for most inside breaches. ”

A trio of Code42’s leading experts on insider threat shared their reactions to the report. Read on to find out their most compelling takeaways.

Jadee Hanson, CISO and VP of Information Systems at Code42, called out:

  • The top motivations for insider threats include financial gain (48%), which is not surprising. It’s followed by fun (23%). It’s deeply concerning to think that a colleague would do something detrimental to their own company… just for fun.
  • Detecting and mitigating inside threats requires a completely different approach than what we (security teams) are used to when it comes to external threats. Insiders are active employees with active access and sometimes the actions these individuals take look completely normal to a security analyst. 
  • Security awareness and education and overall company culture continue to be a very effective way to mitigate the risks of insider threats. 

  • Data theft incidents are driven mostly by employees with little to no technical aptitude or organizational power. Regular users have access to sensitive and monetizable data, and unfortunately they are too often the ones behind internal data breaches.

Code42’s Vijay Ramanathan, SVP Product Management, shared these thoughts: 

  • Insider threat should no longer be a taboo subject for internal security teams. Denial has not helped – it has only resulted in time-to-discovery being months-to-years for most inside breaches. This is a massive blind spot for security teams. Also, this is a problem for all sorts of companies. Not just large ones.

  • The report outlines counter measures that companies should take as part of a comprehensive data security strategy. This is a great starting point. But those measures (outlined on page 7) are nonetheless complex and require skilled staff. This continues to be difficult for many companies, particularly smaller and mid-market organizations, to navigate, especially because of the chronic skills shortage in the security industry. 

  • The “Careless Worker” is called out as one of the harder vectors to protect against. Security teams need to take a proactive, “data hunting” approach to help them understand where data lives and moves, when it leaves the organization, and in what situations data is at risk.

  • Robust data collection and preservation, along with behavior analytics, are models that can help organizations understand where accidental or deliberate data exposure/exfiltration may be occurring. This need is going to become even more stark in the next 12-36 months as companies come to terms with the reality that current data security tools, technologies and practices (e.g., policy management, data classification, user blocking, highly skilled security staff) are not designed for a much more fluid and unpredictable future.
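
To make that last bullet concrete, here is a minimal, purely illustrative sketch of the baseline idea behind behavior analytics: compare a user’s activity today against that user’s own history and flag large deviations. The event shape, the three-sigma threshold and the choice of daily file-movement counts as the signal are assumptions made for illustration, not a description of any particular product.

    # Illustrative sketch: flag users whose daily file-movement count sits far
    # outside their own historical baseline. Names and thresholds here are
    # hypothetical choices, not any vendor's actual model.
    from statistics import mean, stdev

    def unusual_activity(history, today, threshold=3.0):
        """True if today's count is more than `threshold` standard deviations
        above this user's historical mean (needs at least two history points)."""
        if len(history) < 2:
            return False
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return today > mu  # flat history: any increase stands out
        return (today - mu) / sigma > threshold

    # A user who normally moves about ten files a day suddenly moves 300.
    history = [8, 12, 9, 11, 10, 13, 9]
    print(unusual_activity(history, 300))  # True
    print(unusual_activity(history, 12))   # False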

Mark Wojtasiak, VP of Portfolio Marketing, highlighted:

  • Nowhere in the report did Verizon say the goal was to prevent insider threats – the focus was all about detection, investigation and response. Verizon even called out DLP as a monitoring tool, likely to the chagrin of legacy DLP providers.
  • The single biggest problem relative to insider threats is detecting them in the first place, along with the length of time detection takes. I’d argue that most insider breaches go undetected altogether and that the number of insider breaches is grossly underreported.
  • Detecting insider threats comes down to how effective a company is in defining, collecting, correlating, analyzing and reporting on insider indicators of compromise. This basically means “machining” a security analyst’s intuition.
  • Creating insider indicators of compromise is difficult because they rely heavily on what is considered “normal” or “abnormal,” which can vary greatly by company, department, job role, individual and the data itself. It’s a lot of work, so why not just use machine learning to do it? 
  • Once an insider breach is detected and the investigation process starts, it can grow very complex quickly. Oftentimes multiple stakeholders are involved and organizations might hire or outsource digital forensic services, which can be expensive. There has to be a faster, simpler process, especially for small to mid-market companies, which can be devastated by insider threats.
  • Insider Threat Programs go way beyond the incident response process (detect – investigate – respond – communicate, etc.). Ongoing vulnerability audits and assessments are needed to fine tune the insider indicators of compromise.
  • I still find it shocking that data classification continues to be a must have – and that employees need to be trained, made aware of and actually take the steps to classify the data they create. Couldn’t it be an indicator of compromise in and of itself if an employee self-classifies data as non-sensitive, then exfiltrates it? 
  • Finally, it is clear that the key to establishing an insider threat program is to start with the data (called “assets” in the report), and then move to people. 

The rise of insider threats is a significant risk to every business and one that is often overlooked. While we all would like to think that employees’ intentions are good, we must prepare for malicious (or accidental) actions taken by those within our organizations. And because up to 80 percent of a company’s value lies in its intellectual property, insiders are in a position to do serious harm to your business. Is your business prepared to minimize the impact of these data threats?


Tips From the Trenches: Security Needs to Learn to Code

In the old days, security teams and engineering teams were highly siloed: security teams were concerned with things like firewalls, anti-virus and ISO controls, while engineering teams were concerned with writing and debugging code in order to pass it along to another team, like an operations team, to deploy. When they communicated, it was often in the stilted form of audit findings, vulnerabilities and mandatory OWASP Top Ten training classes that left both sides feeling like they were mutually missing the point.

While that may have worked in the past, the speed at which development happens today means that changes are needed on both sides of the equation to improve efficiency and reduce risk. In this blog post, I’ll be talking about why security teams need to learn to code (the flip side of the equation, why engineering teams need to learn security, may be a future blog post).

“ Simply being comfortable with one or two languages can allow you to do code reviews and provide another pair of eyes to your engineers as well. ”

While it’s not uncommon for people to come into security having done code development work in the past, it is not necessarily the most typical career path. Oftentimes, people come into the security realm without any coding experience other than perhaps a Java or Python course they took at school or online. Because security encompasses so many different activities, there would appear to be no downside if security folks outside of a few highly specialized roles, like penetration testing, didn’t have coding experience. However, I’m here to tell you that coding can be beneficial to any security professional, no matter the role.

Let’s start with automation. No matter what you are doing in security, odds are that you have some kind of repeatable process, such as collecting data, doing analysis, or performing some action, that you can automate. Fortunately, more and more applications have APIs available to take advantage of, and are therefore candidates for writing code to do the work so you don’t have to.

At this point, you may think that this sounds a lot like a job for a Security Orchestration Automation and Response (SOAR) tool. A SOAR tool can absolutely be used to automate activities, but already having a SOAR tool is certainly not a requirement. A simple script that ties together a couple of applications via an API to ingest, transform and save data elsewhere may be all you need in order to start getting value out of coding. Plus, this can be a great way to determine how much value you may be able to get out of a full-blown SOAR tool.
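
To show just how small that starting point can be, here is a sketch of such a glue script. Everything specific in it, the endpoints, field names and token, is a hypothetical placeholder; the shape is the point: pull records from one API, transform them and push them to another.

    # Illustrative glue script: all URLs, tokens and field names below are
    # hypothetical placeholders, not any real product's API.
    import requests

    ALERTS_URL = "https://alerts.example.com/api/v1/alerts"    # hypothetical source
    TICKETS_URL = "https://tickets.example.com/api/v1/issues"  # hypothetical sink
    API_TOKEN = "redacted"  # in practice, read from a secrets manager

    def fetch_open_alerts():
        """Pull unresolved alerts from the (hypothetical) alerting API."""
        resp = requests.get(
            ALERTS_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            params={"status": "open"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    def open_ticket(alert):
        """Create a tracking ticket for a single alert."""
        payload = {
            "title": f"[SECURITY] {alert['rule']} on {alert['host']}",
            "description": alert.get("details", "no details provided"),
            "priority": alert.get("severity", "medium"),
        }
        requests.post(
            TICKETS_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json=payload,
            timeout=30,
        ).raise_for_status()

    if __name__ == "__main__":
        for alert in fetch_open_alerts():
            open_ticket(alert)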

Learning to code won’t just help your own efficiency. Writing your own code can help make all of those OWASP Top Ten vulnerabilities much more concrete, which can lead to better security requirements when collaborating with engineers. Simply being comfortable with one or two languages can allow you to do code reviews and provide another pair of eyes to your engineers as well. It’s also incredibly valuable to be able to give engineers concrete solutions when they ask about how to remediate a particular vulnerability in code.

Here at Code42, our security team believes strongly in the value of learning to code. That’s why we’ve set a goal for our entire security team, no matter the role, to learn how to code and to automate at least one repetitive activity with code in 2019. By doing this, we will make our overall security team stronger, work more efficiently and provide more valuable information to our engineering teams.

Happy coding!

Connect with Nathan Hunstad on LinkedIn.


Tips From the Trenches: Providing Value Through Business Continuity

No matter what we do in our jobs, we all want to provide value back to the organizations where we work. With some jobs, tangible evidence of value is very apparent, such as hitting your sales quota or building code for a feature in your software. In business continuity, that can be a bit of a challenge. To start, most people don’t understand what it is or what responsibilities are tied to it. If someone asks me what I do and my response is “business continuity,” the conversation usually goes a different direction shortly thereafter. That makes showing value to your company a challenge from the get-go.

“ If ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful business continuity program. ”

Here are a few key principles I have learned in my business continuity journey that have helped me show value within my organization:

Leadership buy-in

Real simple: your business continuity program has to have leadership buy-in in order to succeed. If you think you’re fully prepared to respond to and recover from a disaster without buy-in from leadership, you’re kidding yourself. Leadership needs to understand what you’re doing, why you’re doing it and how it will benefit their department and the company as a whole. This will give you top-level support and make your job easier. Guidance from above will ensure your requests for resources, whether for a business impact analysis or for recovery testing, are granted.

No doubt getting leadership’s attention can be a challenge, but it has to happen. I have been a part of organizations that didn’t have it, and the result was a program that could never meet its full potential because our requests for time and effort from other departments were never a priority.

At Code42, we worked with each member of our executive leadership team to outline what we were doing, why we were doing it and what assistance we would need from their department. Department leaders were then able to give direction on who they wanted us to work with, and that set the whole program in motion.

Narrow the scope of your program

On the surface this seems counterintuitive. Why not cover every function and supporting activity? The reasoning is that most companies don’t have a dedicated team of employees focused on business continuity. For some, business continuity is simply one of many responsibilities they hold. And beyond the manpower question, the further you head into supporting functions and away from what’s really critical, the lower the rate of return for the company. The key is to focus on what’s critical. I have experienced this firsthand: my drive to make sure all business functions were documented and prepared for had me spending countless hours covering the full spectrum of the business. By the time I was finished, the data was already out of date, and the effort amounted to a poor use of resources with little to no value for the company.

When we worked with each member of the executive leadership team at Code42, we kept our scope to the top two critical functions that each department performs. This helped our program avoid the minutiae and focus squarely on what’s critical for supporting our product, our customers and our employees.

Make the information accessible

The information for your business continuity program should not be sequestered away from your employees; it should be easy to view and update. This is a rather obvious statement, but one that I have seen many companies struggle with. Here at Code42, we made a misstep by thinking the solution to our business continuity challenges lay with a continuity software provider. The intent was for it to help us manage all of our data, produce plans and be a one-stop shop for all things business continuity. Not long after onboarding, challenges started to emerge. The biggest was that the information was not accessible to the workforce. The other was that it didn’t tie in to any software already in use at Code42. It was on an island, and of little to no value to the business. A pivot was needed, and thankfully we didn’t have to go far for an answer.

The answer came from taking a step back and determining what tools employees use across the company on a day-to-day basis. For us, the answer lay within Confluence, which serves as our internal wiki. This is where we build out department-focused pages covering each department’s critical functions and dependencies. Building on Confluence also allowed us to tie in another company-wide application, JIRA, for tickets related to vendor assessments, risks and incidents. Our focus throughout the process was to ensure value was being passed on to Code42 and its employees, and the key piece of that was having information easily accessible.

Business continuity has a number of inherent challenges, but if ensuring value to the company is at the center of your decisions, it will go a long way in leading to a successful program. I hope the principles I laid out here help you provide better value to your own company.

Connect with Loren Sadlack on LinkedIn.

Code42 Talks DLP with Dark Reading

After unveiling our Next-Gen Data Loss Protection solution at the RSA Conference 2019 in San Francisco, just about every visitor to the Code42 booth asked: How is data loss protection different than data loss prevention?

To answer this question, I sat down with Dark Reading’s Terry Sweeney for a video interview. You’ll find the highlights of our conversation in a short video below — and you can watch the full interview at Dark Reading.

The home security analogy

I like to start with a simple analogy everyone can identify with: Let’s say a would-be burglar comes to your door while you’re at work. In theory, you can rest assured that the person will not break into your house — because you have locks on your doors, right? But we all know locks aren’t failsafe, so what if this individual does find a way in? You won’t know about any of this until you get home — hours later — or until you realize something is missing, perhaps days later. By then, it’s much harder to figure out what all was taken, who took it and when it was taken. That’s the problem with the traditional data loss prevention model: it’s focused on prevention — but if that fails, you’re not left with much.

Now, imagine you have Nest cams inside and outside your house. Your front-door Nest cam notifies you immediately, via smartphone, to activity at your front door. With real-time visibility, if you don’t recognize the face of the visitor and/or are concerned with the actions he takes next (e.g., picking the lock, breaking a window, etc.), you can take action right now. Even if you discover something missing later in the day, you have video logs that will help you figure out when that article was taken and how. Just like the Nest cams, Code42 Next-Gen Data Loss Protection shows you exactly what’s happening, when it’s happening — so you can decide if it’s important and take action now.

Paradigm shift: all data matters

Another major difference in approach between legacy data loss prevention and Code42 Next-Gen Data Loss Protection is how the tools define the value of data. Traditional DLP tools require an organization to decide which data and files are valuable or sensitive — and then figure out how to encode that in rules and policies. But today’s knowledge workers are constantly creating data — and it all matters. From developing new software, to innovating manufacturing processes, to providing consulting services, more and more businesses across every sector are ultimately in the business of making new ideas. For these “progressive makers,” as we call them at Code42, every file and every piece of data holds value in the chain of idea creation. And the value of any given piece of data can skyrocket in an instant — when a project turns from theoretical tinkering into tangible innovation.

Finally, while traditional forms of protected data like PCI, PII and PHI tend to follow predictable formats and patterns that can be recognized through rules, all of this “idea data” is largely unstructured. The data relating to a software product launch, for example, might span from source code files, to Word documents containing marketing plans, to Excel spreadsheets with revenue forecasts and production budgets, to CRM data on target prospects. There’s no way to create a blanket “rule” for defining the structure or pattern of data relating to a valuable product launch.

“ In this new reality of endpoints and cloud where all data matters, Code42 offers an unmatched core capability: We’ve gotten really good at collecting and saving every file, from every user, on every device. ”

In this new reality of endpoints and cloud where all data matters, Code42 offers an unmatched core capability: We’ve gotten really good at collecting and saving every file, from every user, on every device. More importantly, we’ve gotten really good at doing it in near-real time, doing it cost-effectively and doing it without inhibiting users as they’re working. This means organizations no longer have to define, at the outset, what data matters. And this complete data collection unlocks the kind of immediate, comprehensive visibility that creates the foundation of data loss protection — and sets it apart from data loss prevention.

Two critical questions DLP buyers need to ask

One of my favorite questions from Terry Sweeney was, “What should a DLP buyer look for as they’re evaluating a solution?” My answer is simple:

  1. How soon does the tool show you that something is going wrong?
  2. How soon does the tool let you take action?

The most consistent and concerning finding from annual infosecurity reports like Verizon’s Data Breach Investigations Report and the Ponemon Institute’s Cost of a Data Breach Study is that most organizations aren’t discovering incidents for weeks — or months. In fact, the Ponemon Institute’s 2018 research showed the average breach took 197 days for an organization to discover. That’s six months before the investigation even begins — and even longer until the organization can attempt to take some remedial action. That’s a lot of time for data to be lost, tracks to get covered and stolen IP to do damage to a business.

Code42 Next-Gen Data Loss Protection cuts that time-to-awareness from months to minutes. Take the common example of a departing employee: You’ll know if they’ve taken data before they even leave the building — not months later when a rival launches a competing product. Moreover, you’re getting immediate and full visibility around the context of the departing employee’s data removal — you can look at the exact file(s) and see if it’s valuable and/or sensitive — so you can make decisions and take action quickly and confidently.

Enabling infosec automation

My discussion with Terry ended with a look at perhaps the most important factor driving infosecurity forward: the expanding role of automation in helping organizations manage and protect ever-increasing volumes of data. Many organizations fight expanding data security threats with a small handful of infosecurity staff — half of whom are “on loan” from IT. Automation and orchestration platforms pull together and make sense of all the alerts, reports and other data from various infosecurity tools — fighting false positives and alert fatigue, and allowing teams to see more and do more with fewer human eyes. But these platforms are only as good as the inputs they’re fed. They rely on comprehensive data feeds to ensure you can create the customized reports and alerts you need to reliably bolster your security automation. The complete security insights gathered by Code42 Next-Gen Data Loss Protection ensure there are no blind spots in that strategy.

That’s why we’re focused on making sure all our tools plug into automation and orchestration platforms and support the workflow automation capabilities you already have in place. All Code42 tools are available through APIs. If you want us to integrate data and alerts to be automatically provisioned in your SIEM or orchestration tool, we can do that. If you want us to automatically raise an email alert to your ticketing system, we can do that, too. Furthermore, Code42’s Next-Gen DLP allows you to take a more proactive “data-hunting” approach to data security, much like you would with threat hunting to deal with external malware and attacks.

This is where the value of Code42 Next-Gen Data Loss Protection gets really exciting. Our tool gives you incredible off-the-shelf value; it does things no other tool can. We’re seeing organizations integrating our tool with advanced automation and orchestration platforms — using our tool in ways we hadn’t even considered — and really amplifying the value and driving up their return on investment.

Watch the video highlights of the Dark Reading interview here or you can watch the full interview at Dark Reading.


Successful Software Security Training Lessons Learned

How enterprises build, deploy and manage software applications has been turned upside down in recent years. So too have long-held notions of who is a developer. Today, virtually anyone in an organization can become a developer—from traditional developers to quality assurance personnel to site reliability engineers.

Moreover, this trend includes an increasing number of traditional business workers, thanks to new low-code and so-called no-code development platforms. These platforms are making it possible for non-traditional developers, sometimes called citizen developers, to build more of the apps the enterprise needs. Of course, whenever an organization has someone new developing code, it creates a situation that could introduce new security, privacy and regulatory compliance risks.

“ Recently, at Code42, we trained our entire team, including anyone who works with customer data, to ensure everyone was using best practices to secure our production code and environments. ”

For most organizations, this means they must reconsider how they conduct security awareness and application security testing. Recently, at Code42, we trained our entire team, including anyone who works with customer data: the research and development team, quality assurance, cloud operations, site reliability engineers, product developers and others. The goal was to ensure everyone was using best practices to secure our production code and environments.

We knew we needed to be innovative with this training. We couldn’t take everyone and put them in a formal classroom environment for 40 hours; that isn’t the best format for many technologists to learn in.

Instead, we selected a capture the flag (CTF) event. We organized into teams that would be presented with a number of puzzles designed to demonstrate common vulnerabilities, such as those in the OWASP Top 10. We wanted to create an engaging, hands-on event where everyone could learn new concepts around authentication, encryption management and other practices.

We had to create content that would challenge and yet be appropriate and interesting for everyone, including the engineers. It wasn’t easy, considering the teams use different tools and languages and have skillsets that vary widely. Watching the teams work through the CTF was fascinating because you could see their decision-making processes when it came to remediating the issues presented. For problems where a team wasn’t sure of the solution, we provided supporting training materials, including videos.

“ We had to create content that would challenge and yet be appropriate and interesting for everyone, including the engineers. It wasn’t easy, considering the teams use different tools and languages and have skillsets that vary widely. ”

While the event was a success overall, we certainly learned quite a bit that will create a better experience for everyone in our next training. 

Let me say, the CTF style was exceptional. The approach enabled individuals to choose areas they needed to learn, and the instructional videos were well received by those who used them. But I’ll tell you, not everyone was happy. About three-quarters of my team loved it, and then the other quarter wanted to grab pitchforks and spears and chase me down.

First, throughout the contest, the lack of a common development language proved to be a challenge. Within most of the teams, the engineers chose the problems that were in a language with which they were familiar. That often cut the quality assurance or site reliability engineers out of helping on those problems. No good.

Gamification, while well intended, caused problems. As I mentioned, we had instructional videos so that if a team didn’t know the answer to a problem, they could watch the videos, get guidance and learn in the process. But watching them took time, and because time factored into the score, individuals actually skipped the videos.

How we implemented the leaderboards proved counterproductive. Remember how we all (well, many of us) feared being the last person picked in gym class growing up? Well, leaderboards shouldn’t be visible until the game ends, and even then they should summarize only the top finishers. No one likes to know they were in the bottom 10 percent, and it doesn’t help the learning process.

Dispel the fear. These are training and awareness classes. While official, for-credit security training often has a pass/fail outcome, this awareness training is purely for education. Even so, our employees feared their performance would somehow be viewed as bad and could affect their performance reviews—or employment. Face these rumors up front and make it clear the CTF results aren’t tied to work performance.

Overall, our team did learn valuable lessons using our CTF format — the innovative approach we took to educate them was successful in that way. But next time I hold a contest, we will definitely incorporate changes from the lessons above. And I’ll work harder to strike the balance between formal lecture and class setting versus competitive event when there are developers that present with varying experience and skillsets.



Security Pitfalls of Shared Public Links

Imagine terabytes of corporate data exposed in the wild by employees sharing publicly available links on the cloud. Sound far-fetched? It’s not. According to a recent article from SiliconANGLE, that’s exactly what happened when security researchers uncovered terabytes of data from over 90 companies exposed by employees sharing publicly available links to Box Inc.’s cloud storage platform. And while it’s easy to think that this problem is restricted to Box, it is in fact a problem most cloud services, like Dropbox or OneDrive for Business, need to address.

“ Cloud security is failing every day due to public file share links – content that users deliberately or accidentally expose to outsiders or to unapproved users within the company. ”

Cloud security is failing every day due to public file share links – content that users deliberately or accidentally expose to outsiders or to unapproved users within the company. This presents significant gaps in cloud security and compliance strategies and raises important questions such as:

  • What data is going to an employee’s personal cloud?
  • Who’s making a link public instead of sharing it with specific people?
  • Are departments or teams using other/non-sanctioned clouds to get their work done?
  • Are contractors getting more visibility than they should in these clouds?

Compounding the problem, the remedy most cloud services provide to administrators is to “configure shared link default access” for users. Administrators can configure shared link access so accidental or malicious links can’t be created in the first place. However, there is a clear loss of productivity when users who legitimately need to collaborate and share are mistakenly denied. This is where IT/security teams need to strike a fine balance between protecting corporate IP and enabling user productivity.

Code42’s approach to DLP doesn’t block users or shut down sharing; it gives organizations visibility while information flows freely between partners, customers and users in general. Beyond flagging that a link has gone public in the first place, security protocols should further include the following (a brief sketch of these checks in code follows the list):

  • Identifying files that are going to personal clouds
  • Understanding who’s sharing links publicly and why
  • Mitigating instances of non-sanctioned clouds
  • Gaining visibility into cloud privileges extended to contractors or other third parties
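
As a purely illustrative sketch of those four checks, the snippet below reviews a single link-sharing event record. The record shape and field names are hypothetical; real cloud services expose similar information through their admin and reporting APIs, though the details differ by provider.

    # Illustrative only: the event-record shape below is hypothetical.
    SANCTIONED_DOMAINS = {"example.com"}  # hypothetical corporate cloud domain

    def review_shared_link(event):
        """Return human-readable findings for one sharing event."""
        findings = []
        if event["access"] == "public":
            findings.append(f"{event['owner']} made '{event['file']}' public")
        if event["owner_domain"] not in SANCTIONED_DOMAINS:
            findings.append(f"'{event['file']}' lives in a non-sanctioned cloud")
        if event.get("recipient_type") == "contractor" and event["access"] != "restricted":
            findings.append(f"a contractor has broad access to '{event['file']}'")
        return findings

    event = {
        "file": "roadmap.xlsx",
        "owner": "jsmith",
        "owner_domain": "personal-cloud.example.net",
        "access": "public",
        "recipient_type": "contractor",
    }
    for finding in review_shared_link(event):
        print(finding)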

Tips From the Trenches: Cloud Custodian–Automating AWS Security, Cost and Compliance

“We’re moving to the cloud.” If you haven’t heard this already, it’s likely you will soon. Moving to the public cloud poses many challenges upfront for businesses today. Primary problems that come to the forefront are security, cost and compliance. Where do businesses even start? How many tools do they need to purchase to fulfill these needs?

After deciding to jump-start our own cloud journey, we spun up our first account in AWS, and it was immediately apparent that traditional security controls weren’t necessarily going to adapt. Trying to lift and shift firewalls, threat and vulnerability management solutions and the like ran into a multitude of issues, including but not limited to networking, AWS IAM roles and permissions, and tool integrations. It was clear that tools built for on-premise deployments were neither cost-effective nor technically effective in AWS, and a new solution was needed.

“ It was clear that tools built for on-premise deployments were neither cost-effective nor technically effective in AWS, and a new solution was needed. ”

To address these findings, we decided to move to a multi-account strategy and automate our resource controls to support increasing consumption and account growth. Our answer was Capital One’s open source Cloud Custodian tool, which helps us manage our AWS environments by ensuring the following business needs are met:

  • Compliance with security policies
  • Compliance with AWS tagging requirements
  • Identification of unused resources for removal/review
  • Enforcement of off-hours to maximize cost reduction
  • Enforcement of encryption requirements
  • Assurance that AWS Security Groups are not overly permissive
  • And many more…

After identifying a tool that could automate our required controls in multiple accounts, it was time to implement it. The rest of this blog will focus on how Cloud Custodian works, how Code42 uses the tool, what kinds of policies (with examples) Code42 implemented and resources to help you get started implementing Cloud Custodian in your own environment.

How Code42 uses Cloud Custodian

Cloud Custodian is an open source tool created by Capital One. You can use it to automatically manage and monitor public cloud resources as defined by user-written policies. Cloud Custodian works in AWS, Google Cloud Platform and Azure. We, of course, use it in AWS.

As a flexible “rules engine,” Cloud Custodian allowed us to define rules and remediation efforts into one policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. These policies are written in a simple YAML configuration file that specifies a resource type, resource filters and actions to be taken on specified targets. Once a policy is written, Cloud Custodian can interpret the policy file and deploy it as a Lambda function into an AWS account. Each policy gets its own Lambda function that enforces the user-defined rules on a user-defined cadence. At the time of this writing, Cloud Custodian supports 109 resources, 524 unique actions and 376 unique filters.
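
To give a feel for the format, here is a minimal, hypothetical policy showing the three-part structure just described: a resource type, filters and actions. The Python around it (which assumes the PyYAML package is installed) merely parses and prints that structure; in practice the YAML file itself is what gets deployed via the custodian CLI.

    # A minimal, hypothetical policy illustrating the structure Custodian
    # expects; this example stops EC2 instances that have no "owner" tag.
    import textwrap
    import yaml

    POLICY = textwrap.dedent("""
        policies:
          - name: stop-untagged-ec2
            resource: ec2            # which resource type to target
            filters:
              - "tag:owner": absent  # match instances with no owner tag
            actions:
              - stop                 # what to do with the matches
    """)

    for policy in yaml.safe_load(POLICY)["policies"]:
        print(policy["name"], "->", policy["resource"])
        print("  filters:", policy["filters"])
        print("  actions:", policy["actions"])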

As opposed to writing and combining multiple custom scripts that make AWS API calls, retrieve responses and then execute further actions on the results, Cloud Custodian simply interprets an easy-to-write policy that takes into consideration the resources, filters and actions and translates them into the appropriate AWS API calls. These simplifications make this type of work easy and achievable even for non-developers.

“ As a flexible rules engine, Cloud Custodian allowed us to define rules and remediation efforts into one policy. Cloud Custodian utilizes policies to target cloud resources with specified actions on a scheduled cadence. ”

Now that we understand the basic concepts of Cloud Custodian, let’s cover the general implementation. Cloud Custodian policies are written and validated locally. These policies are then deployed either by running Cloud Custodian locally and authenticating to AWS or, in our case, via CI/CD pipelines. At Code42, we deploy a baseline set of policies to every AWS account as part of the bootstrapping process and then add/remove policies as needed for specific environments. In addition to account-specific policies, there are scenarios where a team may need an exemption; as such, we typically allow an “opt-out” tag for some policies. Code42 has policy violations reported to a Slack channel via a webhook created for each AWS account. In addition, we also feed the resources.json logs directly into a SIEM for more robust handling/alerting.

Broadly speaking, Code42 has categorized policies into two types – (i) notify only and (ii) action and notify. Notify-only policies are more hygiene-related and include policies like tag compliance checks, multi-factor authentication checks and more. Action-and-notify policies take actions after certain conditions are met, unless tagged for exemption; they include policies like s3-global-grants, ec2-off-hours-enforcement and more. The output from the Custodian policies is also ingested into a SIEM solution to provide more robust visualization and alerting. This allows individual account owners to review policy violations and assign remediation actions to their teams. For Code42, these dashboards give both the security team and account owners a view of the overall health of our security controls and account hygiene. Examples of Code42 policies may be found at GitHub.
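
On the reporting side, the Slack notifications mentioned above need nothing more than an incoming webhook per account. A minimal sketch, with a placeholder webhook URL and a message format of our own invention:

    # Post a policy violation to Slack via an incoming webhook.
    # The webhook URL below is a placeholder, not a real endpoint.
    import requests

    SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"

    def notify_slack(policy_name, resource_id, account):
        requests.post(
            SLACK_WEBHOOK,
            json={"text": f"[{account}] policy {policy_name} flagged {resource_id}"},
            timeout=30,
        ).raise_for_status()

    notify_slack("ec2-tag-enforcement", "i-0123456789abcdef0", "prod-account")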

What policies did we implement?

There are three primary policy types Code42 deployed: cost savings, hygiene and security. Since policies can take actions on resources, we learned that it is imperative for the team implementing the policies to collaborate closely with any teams affected by them, in order to ensure all stakeholders know how to find and react to alerts and can provide proper feedback and adjustments when necessary. Good collaboration with your stakeholders will ultimately drive the level of success you achieve with this tool. Let’s hit on a few specific policies.

Cost Savings Policy – ec2-off-hours-enforcement

EC2 instances are one of AWS’s most commonly used services. EC2 allows a user to deploy cloud compute resources on demand as necessary. However, there are many cases where the compute gets left “on” even when it’s not used, which racks up costs. With Cloud Custodian, we’ve allowed teams to define “off-hours” for their compute resources. For example, if I have a machine that only needs to be online two hours a day, I can automate the start and stop of that instance on a schedule. This saves 22 hours of compute time per day. As AWS usage increases and expands, these cost savings add up quickly.
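
The arithmetic behind that claim is worth a quick sketch; the hourly rate below is a hypothetical round number, since actual EC2 pricing varies by instance type and region.

    # Back-of-the-envelope off-hours savings; the rate is hypothetical.
    HOURLY_RATE = 0.10        # assumed $/hour for one instance
    HOURS_SAVED_PER_DAY = 22  # instance only needs 2 of 24 hours
    INSTANCES = 50
    DAYS_PER_MONTH = 30

    monthly_savings = HOURLY_RATE * HOURS_SAVED_PER_DAY * INSTANCES * DAYS_PER_MONTH
    print(f"${monthly_savings:,.2f} saved per month")  # $3,300.00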

Hygiene Policy – ec2-tag-enforcement

AWS resource tagging is highly recommended in any environment. Tagging allows you to define multiple keys with values on resources that can be used for sorting, tracking, accountability, etc. At Code42, we require a pre-defined set of tags on every resource that supports tagging in every account. Manually enforcing this would be nearly impossible, so we utilized a Custodian policy to enforce our tagging requirements across the board. This policy performs the series of actions described below.

  1. The policy applies filters to look for all EC2 resources missing the required tags.
  2. When a violation is found, the policy adds a new tag to the resource “marking” it as a violation.
  3. The policy notifies account owners of the violation and that the violating instance will be stopped and terminated after a set time if it is not fixed.

If Cloud Custodian then finds the required tags have been added within 24 hours, it removes the violation-marking tag. If the proper tags are still missing, the policy continues to notify account owners that their instance will be terminated. If the violation is not fixed within the specified time period, the instance is terminated and a final notification is sent.

This policy ultimately ensures we have tags that distinguish things like a resource “owner.” An owner tag allows us to identify which team owns a resource and where the deployment code for that resource might exist. With this information, we can drastically reduce investigation/remediation times for misconfigurations or for troubleshooting live issues.
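
For readers who want to see the moving parts, here is a boto3 sketch of the first two steps: finding EC2 instances that are missing required tags and marking them. This is not the Custodian policy itself (that is a YAML file); it is equivalent logic written out by hand, and the required tag keys are examples.

    # Hand-rolled equivalent of the policy's find-and-mark steps.
    # REQUIRED_TAGS and the marker tag name are example values.
    import boto3

    REQUIRED_TAGS = {"owner", "environment"}

    ec2 = boto3.client("ec2")

    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                present = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - present
                if missing:
                    # Step 2: "mark" the violation; notification and the
                    # delayed stop/terminate happen on a later schedule.
                    ec2.create_tags(
                        Resources=[instance["InstanceId"]],
                        Tags=[{"Key": "tag-violation",
                               "Value": "missing:" + ",".join(sorted(missing))}],
                    )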

Security Policy – S3-delete-unencrypted-on-creation

At Code42, we require that all S3 buckets have either KMS or AES-256 encryption enabled. It is important to remember that we have an “opt-out” capability built into these policies so they can be bypassed when necessary and after approval. The bypass is done via a tag that is easy for us to search for and review to ensure bucket scope and drift are managed appropriately.

This policy is relatively straightforward. If the policy sees a “CreateBucket” CloudTrail event, it checks the bucket for encryption. If no encryption is enabled and an appropriate bypass tag is not found, then the policy deletes the bucket immediately and notifies the account owners. It’s likely by this point you’ve heard of a data leak due to a misconfigured S3 bucket. It can be nearly impossible to manually manage a large-scale S3 deployment or buckets created by shadow IT. This policy helps account owners learn good security hygiene, and at the same time it ensures our security controls are met automatically without having to search through accounts and buckets by hand. Ultimately, this helps verify that S3 misconfigurations don’t lead to unexpected data leaks.
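
Sketched by hand in boto3, the core check looks roughly like this; the real control is the Custodian policy reacting to the CloudTrail event, and the bypass tag key is a placeholder.

    # Hand-rolled sketch of the unencrypted-bucket check; the opt-out tag
    # key is a placeholder for whatever your approval process defines.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BYPASS_TAG = "encryption-exempt"

    def bucket_is_encrypted(name):
        try:
            s3.get_bucket_encryption(Bucket=name)
            return True
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code == "ServerSideEncryptionConfigurationNotFoundError":
                return False
            raise

    def bucket_has_bypass_tag(name):
        try:
            tags = s3.get_bucket_tagging(Bucket=name)["TagSet"]
        except ClientError:
            return False  # no tags on the bucket at all
        return any(t["Key"] == BYPASS_TAG for t in tags)

    def enforce(bucket_name):
        """Delete a freshly created bucket that is unencrypted and not exempt."""
        if not bucket_is_encrypted(bucket_name) and not bucket_has_bypass_tag(bucket_name):
            s3.delete_bucket(Bucket=bucket_name)  # then notify account owners
            print(f"deleted unencrypted bucket: {bucket_name}")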

Just starting out?

Hopefully this blog helped highlight the power of Capital One’s Cloud Custodian and its automation capabilities. Cloud Custodian policies can be easily learned and written by non-developers and provide needed security capabilities. Check out the links in the “Resources” section below for Capital One’s documentation, as well as examples of some of Code42’s baseline policies that get deployed into every AWS account during our bootstrap process. Note: these policies should be tuned to your business and environment needs, and not all will be applicable to you.

Resources:

Authors:

Aakif Shaikh, CISSP, CISA, CEH, CHFI, is a senior security analyst at Code42. His responsibilities include cloud security, security consulting, penetration testing and insider threat management. Aakif brings 12+ years of experience across a wide variety of technical domains within information security, including information assurance, compliance and risk management. Connect with Aakif Shaikh on LinkedIn.


Byron Enos is a senior security engineer at Code42, focused on cloud security and DevSecOps. Byron has spent the last four years helping develop secure solutions for multiple public and private clouds. Connect with Byron Enos on LinkedIn.


Jim Razmus II is director of cloud architecture at Code42. He tames complexity, seeks simplicity and designs elegantly. Connect with Jim Razmus II on LinkedIn.

Tips From the Trenches: Automating Change Management for DevOps

One of the core beliefs of our security team at Code42 is SIMPLICITY. All too often, we make security too complex, whether because there are no easy answers or because the answers are very nuanced. But complexity also makes it really easy for users to find work-arounds or ignore good practices altogether. So, we champion simplicity whenever possible and make it a basic premise of all the security programs we build.

“ At Code42, we champion simplicity whenever possible and make it a basic premise of all the security programs we build. ”

Change management is a great example of this. Most people hear “change management” and groan. At Code42, we’ve made great efforts to build a program that is nimble, flexible and effective. The tenets we’ve defined to drive our program are to:

  • PREVENT issues (collusion, duplicate changes)
  • CONFIRM changes are authorized changes
  • DETECT issues (customer support, incident investigation)
  • COMPLY with regulatory requirements

Notice that compliance is there, but last on the list. While we do not discount the importance of compliance in conversations around change management or any other security program, we avoid at all costs using “because compliance” as the justification for anything we do.

Based on these tenets, we focus our efforts on high-impact changes that have the potential to affect our customers (both external and internal). We set risk-based maintenance windows that balance potential customer impact with the need to move efficiently.

We gather with representatives from both the departments making changes (think IT, operations, R&D, security) and those impacted by changes (support, sales, IX, UX) at our weekly Change Advisory Board meeting–one of the best attended and most efficient meetings of the week–to review, discuss and make sure teams are appropriately informed of what changes are happening and how they might be impacted.

This approach has been working really well. Well enough, in fact, for our Research Development & Operations (RDO) team to embrace DevOps in earnest.

New products and services were being deployed through automated pipelines instead of through our traditional release schedule. Instead of bundling lots of small changes into a product release, developers were now looking to create, test and deploy features individually–and autonomously. This was awesome! But it also meant our change management program–even in its simplicity–was not going to cut it.

“ We needed to make sure change control wasn’t a blocker in an otherwise automated process. We looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets, giving us visibility into the work being done. ”

So, with the four tenets we used to build our main program, we set off to evolve change management for our automated deployments. Thankfully, because all the impacted teams had seen the value of our change management program to date, they were on board and instrumental in evolving the program.

But an additional tenet had to be considered for the pipeline changes: we needed to make sure change control wasn’t a blocker in an otherwise automated process. So we looked at our current pipeline tooling to manage approvers and created integrations with our ticketing system to automatically create tickets, giving us visibility into the work being done. We defined levels of risk tied to the deployments and set approvers and release windows based on risk. This serves both as a control to minimize potential impact to customers and as a challenge to developers to push code that is as resilient and low-impact as possible so they can deploy at will.
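
A sketch of what such a pipeline step can look like follows; the ticketing endpoint, the risk rules and the response field are hypothetical stand-ins for whatever tooling you already run.

    # Illustrative pipeline step: auto-create a change ticket based on risk.
    # The endpoint, rules and response shape are hypothetical.
    import requests

    TICKET_API = "https://tickets.example.com/api/v1/changes"

    RISK_RULES = {
        "low":    {"approvers": 0, "window": "anytime"},
        "medium": {"approvers": 1, "window": "business-hours"},
        "high":   {"approvers": 2, "window": "maintenance-window"},
    }

    def record_change(service, description, risk):
        """Create a change ticket automatically from the deploy pipeline."""
        rules = RISK_RULES[risk]
        resp = requests.post(
            TICKET_API,
            json={
                "service": service,
                "description": description,
                "risk": risk,
                "required_approvers": rules["approvers"],
                "release_window": rules["window"],
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["ticket_id"]  # hypothetical response field

    # Called as a pipeline step, e.g.:
    # record_change("billing-api", "deploy build 1234", risk="low")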

We still have work to do. Today, tracking when changes are deployed is still manual. In our near-future state, our pipeline tooling will serve as a gate and hold higher-risk deployments to be released in maintenance windows. Additionally, we want to focus on risk, so we are building in commit hooks with required approvers based on risk rating. And, again, because we worked closely with the impacted teams to build a program that fit their goals (and because our existing program had proven its value to the entire organization), the new process is working well.

Most importantly, evolving our change process for our automated workflows allows us to continue to best serve our customers by iterating faster and getting features and fixes to market sooner.

Connect with Michelle Killian on LinkedIn.