
Tips From the Trenches: Architecting IAM for AWS with Okta

In the last year, Code42 made the decision to more fully embrace a cloud-based strategy for our operations. We found that working with a cloud services provider opens up a world of possibilities that we could leverage to grow and enhance our product offerings. This is the story of how we implemented identity and access management (IAM) in our Amazon Web Services (AWS) environment.

IAM guiding principles

Once the decision was made to move forward with AWS, our developers were hungry to start testing the newly available services. Before they could start, we needed two things: an AWS account and a mechanism for them to log in. Standing up an account was easy. Implementing an IAM solution that met all of our security needs was more challenging. We were given the directive to architect and implement a solution that met the requirements of our developers, operations team and security team.

We started by agreeing on three guiding principles as we thought through our options:

1.) Production cloud access/administration credentials need to be separate from day-to-day user credentials. This was a requirement from our security team that aligns with existing production access patterns. Leveraging a separate user account (including two-factor authentication) for production access decreases the likelihood of the account being phished or hijacked. This secondary user account wouldn’t be used to access websites, email or for any other day-to-day activity. This wouldn’t make credential harvesting impossible, but it would reduce the likelihood of an attacker easily gaining production access by targeting a user. The attacker would need to adopt advanced compromise and recon methods, which would provide our security analysts extra time to detect the attack.

2.) There will be no local AWS users besides the required root user, which will have two-factor authentication enforced; all other users will come through Okta. Local AWS users have significant limitations and become unwieldy as a company grows beyond a few small accounts. We were expecting to have dozens, if not hundreds, of unique AWS accounts. This could lead to our developers having a unique user in each of those environments. These user accounts would each have their own password and two-factor authentication. In addition to a poor end-user experience, identity lifecycle management would become a daunting, manual task. Imagine logging into more than 100 AWS environments to check whether a departing team member has an account. Even if we automated the process, it would still be a major headache.

Our other option was to provide developers with one local AWS user with permissions to assume roles in the different accounts. This would be difficult to manage in its own way, as we tried to map which users could connect to which accounts and with which permissions. Instead of being a lifecycle challenge for the IAM team, it would become a permissioning and access challenge.

Fortunately for us, Code42 has fully embraced Okta as our cloud identity broker. Employees are comfortable using the Okta portal and all users are required to enroll in two-factor authentication. We leverage Active Directory (AD) as a source of truth for Okta, which helps simplify user and permission management. By connecting Okta to each of our AWS accounts, users can leverage the same AD credentials across all AWS accounts — and we don’t need to make any changes to the existing IAM lifecycle process. Permissioning is still a challenge, but it can be managed centrally with our existing IAM systems. I will describe in greater detail exactly how we achieved this later in the post.

3.) Developers will have the flexibility to create their own service roles, but will be required to apply a “deny” policy, which limits access to key resources (CloudTrail, IAM, security services, etc.). As we were creating these principles, it became clear that the IAM team would not have the bandwidth to be the gatekeeper of all roles and policies (how access is granted in AWS). Developers would need to be empowered to create their own service roles, while we maintained control over the user access roles via Okta. Letting go of this oversight was very difficult. If not properly managed, it could have opened us up to the risk of malicious, over-permissioned or accidental modification of key security services.

Our initial solution to this problem was to create a “deny” policy that would prevent services and users from interacting with some key security services. For example, there should never be a need within an application or microservice to create a new IAM user or a new SAML provider. We notified all users that this deny policy must be attached to all roles created and we used an external system to report any roles that didn’t have this deny policy attached.
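
To illustrate, a report like ours can be generated with a short script. The sketch below is a minimal illustration (not our actual reporting system), assuming boto3 credentials for the target account and a required deny policy named DenySecurityServices, which is a hypothetical name; it lists any IAM roles that do not have that managed policy attached.

    import boto3

    DENY_POLICY_NAME = "DenySecurityServices"  # hypothetical name for the required deny policy

    iam = boto3.client("iam")

    def roles_missing_deny_policy() -> list[str]:
        """Return the names of IAM roles that lack the required deny policy."""
        missing = []
        for page in iam.get_paginator("list_roles").paginate():
            for role in page["Roles"]:
                attached = iam.list_attached_role_policies(RoleName=role["RoleName"])
                names = {p["PolicyName"] for p in attached["AttachedPolicies"]}
                if DENY_POLICY_NAME not in names:
                    missing.append(role["RoleName"])
        return missing

    if __name__ == "__main__":
        for name in roles_missing_deny_policy():
            print(f"Role without deny policy attached: {name}")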

Recently, AWS released a new IAM feature called permissions boundaries. The intent of permissions boundaries is similar to that of our deny policy. By using permissions boundaries, we can control the maximum permissions users can grant to the IAM roles they create. We plan to roll this out in lieu of the deny policy in the very near future.
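
For reference, here is a minimal sketch of how a permissions boundary might be applied when a service role is created with boto3; the role name, trust policy and boundary ARN are placeholders, not our actual configuration.

    import json

    import boto3

    iam = boto3.client("iam")

    # Placeholder ARN for the managed policy used as the permissions boundary.
    BOUNDARY_ARN = "arn:aws:iam::123456789012:policy/developer-permissions-boundary"

    # Trust policy allowing an AWS service (Lambda, in this example) to assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    # Any permissions later attached to this role cannot exceed the boundary policy.
    iam.create_role(
        RoleName="example-service-role",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
        PermissionsBoundary=BOUNDARY_ARN,
    )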

Example of a role found without the deny policy attached

Implementing Okta and AWS

When thinking through connecting Okta and AWS, we were presented with two very different architectural designs: hub and spoke and direct connect. The hub and spoke design leverages an AWS landing account that is connected to Okta. Once logged in to this account, users can switch roles into other AWS accounts they are authorized to use. The direct connect design creates a new Okta application icon for each AWS account. Users access their accounts by visiting their Okta homepage and selecting the account they want to use.

Power users tend to prefer the hub and spoke model, as this allows them to quickly jump from account to account without logging in again or grabbing a new API token. The more casual users prefer to have all accounts presented on one page. They aren’t swapping among accounts, and it isn’t fair to ask them to memorize account numbers (or even exact short names) so they can execute an assume role command. In addition to user experience, we considered how easy it would be to automate management once a new account is created. The two approaches each have merit, so we decided to implement both.

When a new account is created, it is bootstrapped to leverage the hub and spoke landing account. Automation can immediately start working with the account, and certain power users get the access they need without any IAM intervention. The IAM team can revisit the account when convenient and stand up the direct connection to Okta. New Okta features, currently in beta, will improve this direct connect process.
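
For illustration, here is a minimal boto3 sketch of the hub and spoke pivot described above; the member account ID and role name are placeholders, and the initial credentials are assumed to come from an Okta-brokered session in the landing account.

    import boto3

    # Placeholders: the member account to pivot into and the role created by our bootstrap.
    TARGET_ACCOUNT_ID = "210987654321"
    TARGET_ROLE_NAME = "landing-account-admin"

    # Credentials for this STS call come from the Okta-brokered landing account session.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{TARGET_ACCOUNT_ID}:role/{TARGET_ROLE_NAME}",
        RoleSessionName="hub-and-spoke-example",
    )["Credentials"]

    # Use the temporary credentials to work in the member account.
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(ec2.describe_regions()["Regions"][0]["RegionName"])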

One final thing I would like to touch on is how we leverage the Okta launcher to get API tokens in AWS. One of the benefits of having local users in AWS is that each user is given their own API key. While this is a benefit to end users, these keys are very rarely rotated and could present a significant security risk (such as an accidental public GitHub upload). To address this, Okta has created a Java-based tool that generates temporary AWS API keys. The repo can be found here. Like many other companies, we have created wrappers for this script to make things as easy as possible for our end users. After cloning the repo, a user can type the command "okta -e $ENV_NAME" and the script will reach out to Okta and generate an API key for that specific AWS account. Users do need to know the exact environment name for this script to work, but most power users who need API access will have this information.

No matter where your company is on the path to leveraging a cloud service provider, IAM is a foundational component that needs to be in place for a successful and secure journey. If possible, try to leverage your existing technologies to help improve user experience and adoption. I hope the principles we shared here help you think through your own requirements.


Tips From the Trenches: Searching Files in the Cloud

In a few of my previous blogs, I shared some examples of ways the Code42 security team uses Code42 Forensic File Search to find interesting files — macro-enabled Microsoft Office files, known malicious MD5 hashes and so on. Now that the search capabilities of our newest product have been extended beyond endpoints to include cloud services, such as Google Drive and Microsoft OneDrive, I’d like to look at how we’re using this broadened visibility in our investigations.

“ Because we can now use Code42 Forensic File Search to search for files and file activity across both endpoints and Google Drive, we can be more certain of the locations of sensitive files when we are doing file movement investigations. ”

Finding files – and tracking file movement – in the cloud

Code42 uses Google Drive as a cloud collaboration platform. Because we can now use Code42 Forensic File Search to search for files and file activity across both endpoints and Google Drive, we can be more certain of the locations of sensitive files when we are doing file movement investigations. We combine Code42 Forensic File Search with the Code42 File Exfiltration Detection solution to execute an advanced search — using a given MD5 hash — to find files that have been moved to a USB drive. This allows us to quickly build a complete picture of where a file exists in our environment — and how it may have moved from someone’s laptop to the cloud and back.
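
To give a rough idea of what that looks like in practice, the sketch below builds a raw search clause for a file's MD5 hash using the same {operator, term, value} shape as the examples in our ffs-tools repository; the term name "md5" and the overall payload structure are approximations, so check the repo for the exact schema.

    import json

    # Hypothetical MD5 of the sensitive file we are tracing across endpoints and Google Drive.
    suspect_md5 = "0123456789abcdef0123456789abcdef"

    # Search clause in the same {operator, term, value} shape used by the ffs-tools examples;
    # the term name "md5" is an assumption, not confirmed syntax.
    clause = {"operator": "IS", "term": "md5", "value": suspect_md5}

    # Written to a file so it can be fed to a raw search (for example, via ffs_search.py).
    with open("md5_hunt.json", "w") as f:
        json.dump(clause, f, indent=2)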

What files are shared externally?

Using the latest version of Code42 Forensic File Search, we can also search files based on their sharing status. For example, in a matter of a few seconds, we can search for all Google Drive documents that are shared with non-Code42 users. This shows us all documents that have been intentionally or inadvertently shared outside of the company. A deeper look at this list helps us identify any information that has been shared inappropriately. As with all searches within Code42 Forensic File Search, these investigations take only a few seconds to complete.

Here’s a hypothetical example: Let’s say the organization was pursuing an M&A opportunity and we wanted to make sure that confidential evaluation documents weren’t being shared improperly. We could use Code42 Forensic File Search to pull up a list of all documents shared externally. Should that list contain one of the confidential M&A evaluation documents, we could look more closely to determine if any inappropriate sharing occurred.

Continually finding new use cases

Code42’s ffs-tools repository on GitHub now includes several new searches that take advantage of our new cloud capabilities. You can find them all here.

Like most organizations, we use many cloud services to perform our day-to-day work. That’s why in the near future, we plan to expand the search capabilities of Code42 Forensic File Search across even more cloud services — giving you even greater visibility into the ideas and data your organization creates, no matter where they live and move.

Happy threat hunting!


Tips From the Trenches: Multi-Tier Logging

Here’s a stat to make your head spin: Gartner says that a medium-sized enterprise creates 20,000 messages of operational data in activity logs every second. That adds up to 500 million messages — more than 150 GB of data — every day. In other words, as security professionals, we all have logs. A lot of logs. So, how do we know if our log collection strategy is effectively meeting our logging requirements? Unfortunately, a one-size-fits-all logging solution doesn’t exist, so many leading security teams have adopted a multi-tier logging approach. There are three steps to implementing a multi-tier logging strategy:

“ A one-size-fits-all logging solution doesn’t exist, so many leading security teams have adopted a multi-tier logging approach. ”

1. Analyze your logging requirements

A multi-tier logging strategy starts with analyzing your logging requirements. Here’s a simple checklist that I’ve used for this:

Who requires access to the organization’s logs?

  • Which teams require access?
  • Is there unnecessary duplication of logs?
  • Can we consolidate logs and logging budgets across departments?

What logging solutions do we currently have in place?

  • What is the current health of our logging systems?
  • Are we receiving all required logs?
  • Have we included all required log source types?
    • Do we need public cloud, private cloud, hybrid cloud and/or SaaS logs?
  • How many events per second (EPS) are we receiving?
  • How much log storage (in gigabytes) are we using now?
  • What are our logs of interest?
    • Create alerts and/or reports to monitor for each.

What time zone strategy will we use for logging?

  • How many locations across the organization are in different time zones?
  • Will we use a single time zone or a multiple time zone logging strategy?

How much storage capacity will we need for logging over the next 3-5 years? (A back-of-the-envelope sizing sketch follows this checklist.)

Do we have a log baseline in place?

  • Where are our logs stored now?
  • Where should they be stored in the future?

Are we collecting logs for troubleshooting, security analysis and/or compliance?

  • What are our compliance requirements?
    • Do we have log storage redundancy requirements?
    • What are our log retention requirements?
    • Do we have log retention requirements defined in official policy?
  • What logs do we really need to keep?
    • Identify those that are useful.
    • Drop those that are not.
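
To make the storage capacity question concrete, here is a back-of-the-envelope sizing sketch. All of the inputs are illustrative assumptions, not measurements from our environment; plug in your own numbers from the EPS and current-usage questions above.

    # Rough log storage sizing; every input below is an illustrative assumption.
    events_per_second = 5_000         # sustained EPS from your own measurements
    avg_event_size_bytes = 500        # average raw message size
    retention_years = 5               # planning horizon from the checklist
    compression_ratio = 0.25          # assume archives compress to ~25% of raw size

    seconds_per_year = 60 * 60 * 24 * 365
    raw_bytes = events_per_second * avg_event_size_bytes * seconds_per_year * retention_years
    compressed_bytes = raw_bytes * compression_ratio

    tib = 1024 ** 4
    print(f"Raw:        {raw_bytes / tib:,.1f} TiB over {retention_years} years")
    print(f"Compressed: {compressed_bytes / tib:,.1f} TiB over {retention_years} years")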

2. Digest log information

After all of this information is gathered, it’s time to digest it. It’s important to align your logging infrastructure to your log types and retention needs, so that you don’t end up loading a large amount of unstructured data that needs to be searched quickly into a SQL database, for example. Most organizations have multiple clouds, many different devices that generate different log types, and different required analysis methods. In other words, one solution usually does not meet all logging needs.

3. Implement multi-tier logging

If, after analyzing your logging requirements, you find that one logging strategy does not meet all of your requirements, consider this tiered logging flow:

Code42 Tiered Logging Flow Example

In this example logging flow, there are three logging flow types and five log repositories. The flow types are SIEM logs, application logs and system logs. The repositories are the SIEM database, an ELK (Elasticsearch, Logstash and Kibana) stack, two long-term syslog archival servers and cloud storage. Each repository has a unique role (a rough routing sketch follows the list):

  • The SIEM correlates logs with known threats.
  • The ELK stack retains approximately 30-60 days of logs for very fast searching capabilities.
  • The two syslog archival servers store the last three to seven years of syslog and application logs for historical and regulatory purposes. One syslog archival server is used for processing logs, while the other is a limited-touch master log repository.
  • Cloud storage also stores the last three to seven years of logs for historical and regulatory purposes.
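
Conceptually (this is not our actual pipeline), the routing boils down to fanning each event out to the tiers that need it, something like the sketch below; the destination names are placeholders.

    from datetime import datetime, timezone

    # Conceptual fan-out of a single event to the repositories described above.
    def route_event(event: dict) -> list[str]:
        destinations = ["elk"]                              # 30-60 days of fast searching
        if event.get("type") == "siem":
            destinations.append("siem")                     # correlate against known threats
        destinations += [
            "syslog-archive-processing",                    # 3-7 year working archive
            "syslog-archive-master",                        # limited-touch master copy
            "cloud-storage",                                # redundant long-term archive
        ]
        return destinations

    example = {
        "type": "siem",
        "message": "failed login for admin from 203.0.113.7",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    print(route_event(example))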

Simplify your log activity

This is just one quick example of an innovative solution to simplifying log activity. Regardless of whether multi-tier logging is the right solution for your organization, the most critical step is making sure you have a clearly defined logging strategy and an accurate baseline of your current logging state. This basic analysis gives you the understanding and insights you need to simplify log activity — making it easier to accomplish the complex logging goals of your organization.


Tips From the Trenches: 13 Situational Awareness Questions

A key aspect of responding to security events is situational awareness: knowing what is happening in your environment and why. Standard data security tools like firewalls, proxies, email filters, anti-virus reports and SIEM alerts are all common sources of data for situational awareness. However, it’s also important to have visibility into business operations. Only with a holistic view of your entire organization can you have true situational awareness.

For example, as a software company, writing and deploying software is a significant and complex part of our business operations. Naturally, this work is supported by development, test and staging environments, which are used by our engineers to create and test product features. Security teams need to be aware of all non-production environments in their organizations. Open non-production environments (or environments that re-use credentials between production and non-production systems) can be a vulnerability that attackers can exploit.

“ No matter what business your organization is in, you should know where your important data can be found as well as what activities are normal and what is not normal. ”

Asking questions is the key to knowledge. Here are 13 questions I have used to help paint a full picture of internal operations at Code42. They are divided into four groups based on the major areas of concern for most organizations. I hope they will help you improve your situational awareness and overall data security.

Development Environments:

  1. Where are your development environments?
  2. Do you have the appropriate level of logging in those environments?
  3. How is access handled and are there controls that prevent the reuse of credentials across environments?
  4. Are there forgotten dev environments that need to be cleaned up?

Build Process:

  1. Where is your code built?
  2. Where is your code stored?
  3. If somebody maliciously inserted code into your environment, would you be able to detect who, when and what?
  4. Where are your build and CI/CD servers?

Deployments:

  1. Do you know what your typical deploy schedule is?
  2. Are you involved in the change management process and other governance bodies so you know when major changes are occurring in your environment?

Decommissioning:

  1. What systems and environments are going away?
  2. Is there a plan to keep information such as logs from those environments after the environment itself goes away, in accordance with your data retention policies?
  3. Will any infrastructure be reused, and if so, has it been processed properly?

While these questions are specific to software development and deployment, the data security issues they raise are relevant to businesses of all types. No matter what business your organization is in, you should know where your important data can be found as well as what activities are normal and what is not normal. Ensuring that tools are in place to answer these questions is vital.

Here’s one tool I use to answer these questions in our environment: Code42 Forensic File Search. It provides the visibility I need into all activity in our organization. With it, we can quickly and accurately take stock of data movement, data security risks and countless other activities. It makes it easier and faster to know what is happening in our environment and why. It provides the situational awareness that is critical for any modern organization.

Until next time, happy threat hunting!


Tips From the Trenches: Hunting Endpoint Threats at Scale

A big part of “walking the talk” about proactive data security here at Code42 is our “Red Team vs. Blue Team” internal simulations. Today, I’d like to share a few ways I’ve used the Code42 Forensic File Search API to give me completely new threat-hunting capabilities during these exercises.

Endpoint devices are still one of the big blind spots for the modern threat hunter. It’s often nearly impossible to search files on endpoints that are offline or were reimaged due to an incident. This is one reason I’m so excited about the Code42 Forensic File Search API: it doesn’t suffer from this limitation; it truly sees every version of every file on all endpoints, whether online or offline. And since we use our backup product, we also have every file that ever existed.

“ Leveraging Code42 Forensic File Search, I’m able to identify potentially unwanted applications that have slipped past antivirus and other traditional security tools. ”

Locating EXE files in download directories

Leveraging Code42 Forensic File Search, I’m able to identify potentially unwanted applications that have slipped past antivirus and other traditional security tools. To find these previously undetected threats, I’m forwarding output from the Code42 Forensic File Search API (hashes) to the VirusTotal Mass API for further enrichment. Here are some of the high-value searches I’ve used within Code42 Forensic File Search, along with the corresponding JSON files for reproducing the searches in your environment:

  • Search all macro-enabled Word documents
  • Search all DLL files in download directories
  • Search all Dylib files
  • Search all DMG files in download directories

Parameters for customizing FFS search results

Once you have your raw JSON results, here are a few parameters I’ve found useful in customizing Code42 Forensic File Search queries:

  • fileName: The fileName parameter can take a wildcard with a file extension at the end; for example, to list all DLL files: {"operator":"IS","term":"fileName","value":"*.dll"}
  • filePath: The filePath parameter is also useful, especially when you are searching for file types typically found in specific locations. The example below captures the Windows Downloads directory of all users, as well as all paths below each Downloads directory (hence the two wildcards): {"operator":"IS","term":"filePath","value":"c:/users/*/Downloads/*"}

Hash-check best practice

After you have configured your JSON file, the Code42 Forensic File Search query should look something like this:

python ./ffs_search.py --username --search_type raw --in_file ./hunt.json --out_filter md5 | awk '!seen[$0]++' | tr -d '", []' | sed '/^\s*$/d'

The output appears below:

Code42 Security Tips from Trenches Hash-check

Piping the results to awk and tr simply removes duplicate MD5 hashes and cleans up the JSON output, so you avoid the cost of submitting the same MD5 hash to a service like VirusTotal multiple times. Once we have the hashed file results, we can search those hashes across any threat intel or data enrichment tool.

One quick note: The public VirusTotal API key is rate-limited to four queries a minute. I would recommend using a private API key, since searching across hundreds of unique hashes can take quite a long time.
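
For reference, a pared-down version of this hash check might look like the sketch below, which assumes the VirusTotal v2 file/report endpoint and reads the deduplicated hashes from the pipeline above on stdin; with a public key, the sleep keeps you under the rate limit.

    import sys
    import time

    import requests

    API_KEY = "YOUR_VT_API_KEY"  # a private key avoids the public four-queries-per-minute limit
    VT_URL = "https://www.virustotal.com/vtapi/v2/file/report"

    # Read the deduplicated MD5 hashes produced by the ffs_search.py pipeline from stdin.
    for md5 in (line.strip() for line in sys.stdin if line.strip()):
        report = requests.get(VT_URL, params={"apikey": API_KEY, "resource": md5}).json()
        if report.get("response_code") == 1:
            print(f"{md5}: flagged by {report['positives']}/{report['total']} engines")
        else:
            print(f"{md5}: no report found on VirusTotal")
        time.sleep(15)  # stay under the public API rate limit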

Code42 Security Tips from Trenches Hash-check 2

In our case, we leveraged Virustotal-api-hashcheck to give us a user-friendly view of the hashes we’re seeking. There are many VirusTotal API tools on GitHub and you can use whichever one suits your use case.

Finding malicious files—examining your exposure

In my example, while searching for Excel documents, we uncovered one malicious result that ties back to a document lure containing a zero-day exploit used in a targeted attack, as discovered by icebrg. You can read more about the specifics of the file on their website.

Code42 Security Tips from the Trenches Hash Analysis 3

I then took the VirusTotal results and searched back in FFS to determine the extent of our exposure. Fortunately, the malicious file was only on two researchers’ systems, and we confirmed that they had been using the file for analysis and demonstration purposes.

Code42 Security Tips from Trenches Forensic File Search

Leveraging Code42 Backup + Restore for file analysis

I’ve also leveraged Code42 to recover unknown files for automated (sandbox) or manual analysis. In the previous example, there was one Excel document that VirusTotal didn’t recognize:

Code42 Security Tips from Trenches Backup Restore

Instead of uploading a potentially sensitive file to VirusTotal, I can do initial triage and analysis by recovering the file with the Code42 application and uploading it to my sandbox analysis tool. Below is a screenshot of the XLSM file running in a sandbox:

Code42 Security Tips from Trenches Virus Total

After doing initial triage and analysis, the file looks safe and not sensitive. At this point, the file could be uploaded to VirusTotal or kept private.

I hope this article has given you a few ideas of how you can use the Code42 Forensic File Search tool to gain powerful new threat-hunting capabilities in defending your organization. Since I first began using the tool, I’ve continually discovered new ways to gain greater visibility in detecting threats. I hope you’re as excited as I am about the current and future ways that security teams can leverage Code42 Forensic File Search internally to enhance security at scale.

Happy threat hunting!


Tips From the Trenches: Choosing a Security Orchestration Tool

Like most of our customers, we here at Code42 are constantly looking to enhance our efficiencies when it comes to security. As we use more technology in our environment, that means more log sources, more events and potentially more alerts. It also means we have more opportunities to gather information from disparate sources and put together a more complete picture of the events we do investigate.

Five ways security orchestration tools can help

To help simplify and automate those activities, we are turning towards security orchestration tools. There are many reasons to invest in an orchestration tool. But for us, the following five items are the most important:

  1. Case management: As our team has grown, delegating work and tracking who is working on what becomes increasingly important. An orchestration tool can ideally function as that single workspace for assigning, managing and closing tasks.
  2. Metrics: Closely related to the first item on our list, better management of workload can improve visibility into key metrics like SLAs, as well as make it easier to identify bottlenecks and improve efficiency in analyst workflows.
  3. Integration: We’re constantly testing and adding new security tools, so it’s critically important that an orchestration tool easily integrates with tools we not only are using now but also may add in the future. The less time we have to spend developing integrations, the more time we have for investigating anomalies.
  4. Automation: Of course, automation is the name of the game when it comes to an orchestration tool. Automation allows our team to dream up new ways to streamline data collection and enrichment. Automation also can find connections that we may miss when manually reviewing data.
  5. Value: Analyst time is always in short supply. When a tool does the first four things on this list well, it means our security team can spend less time on low-value work—and more time on important analysis tasks. The more a tool allows us to focus on analysis, the more value it brings to our team.

A page out of the Code42 security orchestration playbook

The right orchestration tool also will allow us to leverage our own Code42 application in exciting new ways. Here’s just one example from the Code42 orchestration playbook (a rough automation sketch follows these steps):

  • Step 1 – Automatically locate files: To determine the scope of an event and show us how many endpoints have a suspicious attachment, we can search for a specific MD5 hash using Code42 Forensic File Search.
  • Step 2 – Restore deleted files: In situations in which the original file has already been deleted, Code42 Backup + Restore allows us to automatically restore that file.
  • Step 3 – Investigate suspicious files: With all the suspicious files identified (and restored, if necessary), we can have the orchestration tool kick off analysis, such as detonating the file in a sandbox. Best of all, because we didn’t spend hours or days manually locating and restoring files, we can focus all our time on the critical analysis.
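
Inside an orchestration tool, a playbook like this eventually becomes code. The sketch below is purely illustrative: every helper is a hypothetical stub standing in for a call to the corresponding product or sandbox API.

    # Rough orchestration sketch. The three helpers are hypothetical stubs standing in
    # for calls to Code42 Forensic File Search, Backup + Restore and a sandbox service.

    def ffs_search_by_md5(md5_hash: str) -> list[str]:
        """Stub: would query Forensic File Search and return endpoints holding the file."""
        return ["laptop-0123", "laptop-0456"]

    def restore_file(endpoint: str, md5_hash: str) -> str:
        """Stub: would restore the file from backup and return a local path for analysis."""
        return f"/tmp/{endpoint}-{md5_hash}.bin"

    def sandbox_detonate(path: str) -> str:
        """Stub: would submit the file to a sandbox and return its verdict."""
        return "benign"

    def handle_suspicious_attachment(md5_hash: str) -> None:
        endpoints = ffs_search_by_md5(md5_hash)              # Step 1: scope the event
        for endpoint in endpoints:
            local_copy = restore_file(endpoint, md5_hash)    # Step 2: recover the file
            verdict = sandbox_detonate(local_copy)           # Step 3: analyze it
            print(f"{endpoint}: {md5_hash} -> {verdict}")

    handle_suspicious_attachment("0123456789abcdef0123456789abcdef")  # placeholder hash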

This really is just the tip of the iceberg when it comes to use cases for security orchestration tools—whether it’s leveraging Code42 functionality or any of our many other security tools. As we continue our investigation into security orchestration tools, we’ll share more useful integrations and some automation playbook ideas.

Stay tuned for more updates—and as always, happy threat hunting!

Finding Files in the Wild: From Months to Hours

Every day, your organization faces a variety of data security challenges. Many come from outside your organization, but a significant number also come from within. There are countless reasons why someone may take sensitive data from your organization, many of which are purely opportunistic. For example, what if a file with sensitive financial information is mistakenly emailed to an entire company? That may prove too tempting an opportunity for some. How can your organization respond when this happens? In this post, I’ll discuss how the response process often works today—and how it can be streamlined with Code42 Forensic File Search.

A true story

Here’s a true story of an IT team working through just such a challenge: At this organization, the HR team used Microsoft Excel for management of financial information such as bonus structures and payout schedules. By mistake, a member of the team sent an email containing an Excel file with compensation information for the entire staff to the whole company, instead of the select few who were supposed to receive it. Over 6,000 employees worldwide received the email.

Fortunately, the most sensitive information was contained on a hidden tab in the Excel file, and most employees never even opened the file. The IT team was able to recall the email, but the legal team needed to know who in the company had downloaded and opened it, in case the information within was ever used in a lawsuit. The IT and Security teams were tasked with finding every copy of the file in the organization.

A painful two-month process

While recalling the email cut the number of potential endpoints to search to around 1,000, the IT team still had to search all of those devices, many of which belonged to individuals at the organization’s international offices. The IT team used a Windows file-searching utility to crawl the user endpoints in question, searching for the name of the file. However, Outlook can alter the names of saved attachments, so the IT team also had to scan for any Excel file in the Temp folder of each machine and open those files to visually confirm whether each was the file in question.

Each scan would take between one and eight hours, depending on the size of the drive—and the scan could only be run when the target endpoint was online. If a laptop was closed during the scan, the process would have to be restarted. If a device was located in an international office, the IT team would have to work nights in order to run the scan during that office’s working hours.

The process was a tremendous hit to productivity. The IT team tasked fully half of its staff with running the scans. Two of the organization’s five security team members were tasked with overseeing the process. Even the legal team’s productivity was affected: since the IT team had to open every version of the file to verify the sensitive financial data within, the legal team had to draw up non-disclosure agreements for every person working on the project.

All told, the search for the mistakenly distributed financial file took the organization two months, and the IT team estimated that they had only recovered 80 percent of the instances of the file.

“ With Code42 Forensic File Search, administrators can search and investigate file activity and events across all endpoints in an organization in seconds. ”

A better way: Code42 Forensic File Search

Fortunately, there is a better method for locating critical files in an organization. With Code42 Forensic File Search, administrators can search and investigate file activity and events across all endpoints in an organization in seconds. In the case of this Excel file, the IT team could have used Code42 Forensic File Search to search for the MD5 hash of the file. By searching for the MD5 instead of the file name, Code42 Forensic File Search would locate all instances of the file across all endpoints, including versions that had been renamed in the Temp folder or renamed to intentionally disguise the file. This single search would find all copies of the file, even on endpoints that are offline.
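
For anyone wondering where that hash comes from, the MD5 of the known sensitive file can be computed locally in a few lines of Python; that value is what gets searched, so renamed copies still match.

    import hashlib

    def md5_of_file(path: str) -> str:
        """Compute the MD5 hash of a file, reading it in chunks."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hash the original spreadsheet, then search for this value in Code42 Forensic
    # File Search instead of the (possibly renamed) file name.
    print(md5_of_file("compensation.xlsx"))  # hypothetical file name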

The feature video demonstrates Code42 Forensic File Search in action. The IT team member who shared this story is confident that it would have played out very differently with Code42 Forensic File Search. “Had we had Code42 Forensic File Search deployed, that project was probably done in a couple hours,” he said. “We would have cut two months to a couple hours.”


Tips From the Trenches: Automating File Scans and Alerts

Welcome to the first post of our Tips from the Trenches blog series. Authored by the Code42 security team, the series will explore some of the industry’s latest data security tools and tricks.

One of the best parts of working on the Code42 security operations team is that we’re facing (and solving) many of the exact same challenges as our customers. That means we get to share our experiences and trade tools, tips and tactics for what works—and what doesn’t. With that in mind, here are a few of the cool new ways we’re using search to identify hidden threats before they turn into big problems.

Better criteria for automated scanning and alerting

We’ve got a couple of tools set up to constantly scan our digital environments for risks. Recently, I created a new tool in Python that helps us go deeper with that scanning and alerting, searching via MD5 hash, hostname and filename, to name a few criteria. This scriptable interface to the Code42 Forensic File Search API also allows for use of the full API by accepting raw JavaScript Object Notation (JSON) search payloads, meaning searches are only limited by the imagination of the user.

“ The scriptable interface to the Code42 Forensic File Search API also allows for use of the full API by accepting raw JavaScript Object Notation (JSON) search payloads, meaning searches are only limited by the imagination of the user. ”

Identifying macro-enabled Office files—a common malware source

One sample JSON search payload in the repo searches for macro-enabled Office files in users’ Downloads directories, such as *.docm and *.xlsm files, some of the most common vectors for malware. With the new tool, an automatic search alerts us when new files like these arrive on endpoints, so we can take action, such as sending the MD5 hash to a service like VirusTotal to get a report, or even retrieving the file and sending it to a malware analysis sandbox if necessary.
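
As a rough illustration, the sketch below builds an approximation of that payload in Python, using the same {operator, term, value} clause shape as the examples in the ffs-tools repo; the exact payload envelope lives in the repo, so treat this structure as an approximation.

    import json

    # Rough approximation of the macro-enabled Office search: macro-enabled extensions
    # in any user's Downloads directory. The clause shape mirrors the repo's examples;
    # the full payload envelope is simplified here.
    clauses = [
        {"operator": "IS", "term": "fileName", "value": "*.docm"},
        {"operator": "IS", "term": "fileName", "value": "*.xlsm"},
        {"operator": "IS", "term": "filePath", "value": "c:/users/*/Downloads/*"},
    ]

    with open("macro_office_hunt.json", "w") as f:
        json.dump(clauses, f, indent=2)
    print(f"Wrote {len(clauses)} search clauses to macro_office_hunt.json")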

Snuffing out WannaCry threats

We’ve done some early integration work to test combining Code42 Forensic File Search with a threat intel feed. This will allow us to search and detect malicious files based on MD5 hashes sourced from paid or open-source intel services.

Sharing new threat search tools and tactics

Like you, we’re dealing with new and evolving threats on a daily basis here on the Code42 Security Operations team. We’re constantly looking for new ways to use the tools we have to search and detect threats in smarter, better ways. All of the new search tools I mentioned above are available on our public Github site: https://github.com/code42/ffs-tools.

Live Q&A

Have questions about using Code42 Forensic File Search? Senior Product Manager Matthias Wollnik and I will be fielding questions live on Tuesday, July 24 from 10:30-11:30 am US Central time in the Code42 community.

Keep an eye out for more Tips from the Trenches coming soon—until then, happy threat hunting!


Cybersecurity Investigations: From the White House to Your Company

Cybersecurity investigations, from Hollywood fiction to your company’s reality, were one of the key topics at Code42’s annual Evolution18 customer conference April 9 – 11 in San Francisco.

While the Hollywood version of cybersecurity investigations includes a geek sitting in a dark room in front of a computer screen having an “aha!” moment, real-life cybersecurity investigations are incredibly complex. Guest keynote speaker Theresa Payton should know. In addition to being a national data security expert and former White House CIO, Payton is a star of the CBS series “Hunted,” a reality TV show about cyber investigators who hunt down people living off the grid.

“If solving our cybersecurity and privacy issues were as simple as following security best practices, we would all be safe,” Payton said in her keynote presentation on April 11, striking a serious note between regaling the audience with stories from her time in the White House and her experience on reality television. “In spite of talented security teams and hours of security training, breaches still happen. That’s why I want you to consider a new path to security, that of ‘designing for the human.’ This is the path that recognizes that humans can and do make mistakes, and plan for a way to respond when that inevitably happens.”

To that end, Evolution18 also marked the release of Code42 Forensic File Search, our latest enterprise security product. Code42 Forensic File Search reduces the time it takes to investigate, respond to and recover from data security incidents. By collecting file metadata and events from employee endpoints and making them searchable via the cloud, Forensic File Search provides comprehensive answers to challenging data security questions. It tackles tough questions like:

  • Does known malware have, or has it ever had, a foothold in our environment?
  • Has a particular crypto-mining agent been installed on our employees’ computers? Who has it now?
  • What files did an employee download or delete in the months before resigning?

“Responding to cyberattacks takes too long, exposing organizations to greater risks and climbing costs,” said Joe Payne, president and chief executive officer of Code42. “By collecting, analyzing and indexing file events, Code42 Forensic File Search helps organizations shrink time-to-response windows. Our new product provides visibility to where data lives and moves across all endpoints in seconds.”

Other highlights of the conference included these sessions:

Whether through ignorance or malintent, employees are one of the top data security threats to any organization. Code42 Senior Director of Information Security Jadee Hanson provided a behind-the-scenes look at running an insider threat program to prevent employees from leaking, exposing and exfiltrating data. Hanson also moderated a “Futurist Discussion” panel with industry leaders to discuss what’s on the horizon for data security. Thanks to these panels, attendees learned what’s to come in the near future of cybersecurity, what will become the latest buzzwords in the field and much, much more.

Minimizing user downtime during a device migration is a critical part of any IT strategy. Best practices discussed during a packed-room panel session with Code42 customer MacDonald-Miller included self-service migration and data restoration, as well as managing user expectations.

Data compliance for higher education can be complex, particularly considering that federal grants now require data retention for seven years. In a game show format complete with swag prizes and packs of ramen noodles (Get it? Higher ed? Ramen?), customers learned how to tell if data is leaving the university and how to build Freedom of Information Act requests into their data management processes. Yum.

The reviews from attendees are already rolling in, and we’re blushing at the positive responses:

“Love hearing what other Code42 customers are doing!” – desktop support specialist, Entrust Datacard

“Thanks to everyone at Code42 for putting on such a terrific conference! I’ve already started reaching out to my peers–encouraging them to attend Evolution19.” – IT program manager, University of Colorado Skaggs School of Pharmacy and Pharmaceutical Sciences

“Already registered for Evolution19 and looking forward to Colorado. See you guys then!” –systems administrator, Utex Industries

“Thank you, Code42! It was a great Evolution.” – computer support analyst, Stanford University

Dozens of customers have already signed up for Evolution19. Interested in joining them? Registration for next year’s conference, April 30 – May 2 in Denver, is open now. We hope to see you there!
