The Five Big Themes I’ll Be Looking for Next Week at Black Hat

If there is one annual event that encapsulates cybersecurity, it's Black Hat. Every year since 1997, thousands have gathered in Las Vegas to learn security during the Black Hat training sessions and to see cutting-edge research on display at the Black Hat Briefings. That's right about the time enterprise data security started maturing into widespread practice. Over the years, the crowds have grown, and so has the importance of data security.

Every year at Black Hat, I try to keep an eye out for different trends. These are themes that I believe will be important and drive a lot of the conversation at the conference, not to mention the months that follow. Here’s what I’m looking at this year:

“ What piques my interest about insider threat isn’t just the number of attacks perpetrated by insiders; it’s about how damaging insiders can be to an organization. After all, insiders know where the data is and what data is valuable. ”

The insider threat

There have been several recent news stories that highlight the insider threat, and it's no fluke that they dominate the news cycle. Insider threats are up 50 percent in the past four years alone. Recently, we learned about the McAfee employees who quit and were sued for allegedly taking intellectual property to a competitor. Then there was the SunPower exec who emailed himself highly sensitive trade secrets. And the Desjardins employee who accessed the data of nearly three million customers. Earlier this year, the Verizon Insider Threat Report found that 20 percent of cybersecurity incidents originated with trusted insiders and often went unnoticed for weeks, months or even years.

What piques my interest about insider threat isn’t just the number of attacks perpetrated by insiders; it’s about how damaging insiders can be to an organization. After all, insiders know where the data is and what data is valuable. I’ll be looking for lots of conversations in this area, and new insights into ways to better detect and respond to insider threats before IP is gone and the damage is done.

The increased importance of DevSecOps

The popularity of DevOps keeps growing. According to Allied Market Research, the global market for DevOps tools was nearly $3 billion in 2016 and is expected to reach over $9 billion by 2023 — growing at a healthy 19 percent annual clip. Yet enterprises face a challenge when it comes to incorporating security into DevOps application development and management processes. That's what DevSecOps is all about. I think we're going to hear some great advice on ways to incorporate strong security practices into DevOps.

Insight into the emerging threat landscape

At Black Hat, we always look for a fresh perspective on the threat landscape. The conference presenters examine new attack methods in detail. This year will be no different, and I'm expecting to see interesting approaches to attacks via social media and insider threat exploits.

Latest trends in Zero Trust security

Zero Trust has moved from buzzword to reality, but we're just beginning to see organizations move beyond superficial Zero Trust implementations. Zero Trust is a security concept centered on the belief that companies shouldn't automatically trust anyone or anything inside or outside their perimeters, and instead must verify and monitor everything trying to access company data. I expect the conversations around it to become more meaningful and results-based, and this will continue to be an interesting and compelling topic in the months following Black Hat.

A deep look inside a few interesting security vulnerabilities

At Black Hat, if you don't make it to a few sessions where presenters dive deep into a security flaw or exploit, you're really missing out. These sessions are eye-opening, heart-stopping and mind-jarring. They show the ways in which people make new inroads to devices, hack into large enterprises and leverage vulnerable software to do it silently.

I'm also going to keep a lookout for new buzzwords and emerging attack trends. For instance, we are already seeing the rapid rise of deepfake videos. And let's face it, these videos are getting incredibly good, thanks to sophisticated algorithms that create unprecedented realism. Soon, we'll have trouble trusting our own eyes and ears to discern what is real. This will be fun to watch take shape this year.

Finally, we all know that the IT industry is increasingly turning to artificial intelligence (AI) and machine learning to help secure our increasingly complex environments. But when it comes to new security technologies, it’s a bit of a double-edged sword. What can be used for our defense can also be used to attack us. AI is no different, and in the near future, we’re going to see AI used more commonly to attack enterprises. AI-based attacks are on their way. You can count on it.


Securing Your Software Supply Chain

Software supply chain attacks have hit the news in a big way. In March, hardware maker ASUSTeK Computer, or ASUS, found its auto-update process hijacked to deliver malware; more than a million users may have downloaded a backdoored version of the company's update software.

Concerns about these types of attacks are growing. In recent years, we've witnessed attackers increasingly leveraging software supply chain attacks to corrupt PC utility software and collaborative development tools.

Supply chain attacks are different from other cyberattacks in a number of ways. In addition to being sophisticated, successful attacks have the ability to impact thousands to millions of users in ways few cyberattacks can. Then there’s the rising complexity of software. Software vendors today are making software self-updating and even self-healing. Because of all this, and the increasing amount of open source and third-party software in use, I expect the supply chain attack vector to become more common.

With this in mind, it’s important to understand the steps your software makers and providers are taking to protect the software and systems they provide you. 

“ One must take every reasonable precaution, such as conducting due diligence on hardware and software providers and ensuring that they do what they can to keep their systems and customers secure. ”

For instance, to help ensure the integrity of our software, we take a number of precautions here at Code42. We protect our systems with defense-in-depth, and we monitor the integrity of our files. We also encrypt our software certificates and keep them safe and well protected. We maintain strong file validation to mitigate the risk that an attacker might inject something nasty and try to deploy software to our customers while posing as us.
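
What strong file validation means in practice can be as simple as comparing cryptographic hashes against a known-good manifest. Here is a minimal sketch of that idea in Python; the JSON manifest format is a stand-in assumption for illustration, not a description of Code42's actual tooling:

```python
import hashlib
import json
import pathlib


def sha256(path: pathlib.Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large artifacts don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def validate(release_dir: str, manifest_path: str) -> list:
    """Compare every file in a release directory against a known-good manifest.

    The manifest is a hypothetical JSON map of relative path -> SHA-256 digest,
    generated at build time and stored separately from the release artifacts.
    """
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    root = pathlib.Path(release_dir)
    problems = []
    for rel_path, expected in manifest.items():
        target = root / rel_path
        if not target.is_file():
            problems.append(f"missing: {rel_path}")
        elif sha256(target) != expected:
            problems.append(f"hash mismatch: {rel_path}")
    return problems
```

The important property is that the manifest is produced at build time and distributed through a channel separate from the artifacts it describes, so tampering with one doesn't let an attacker silently fix up the other.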

Still, these types of attacks are humbling for security professionals. They highlight the stark reality that no matter how many precautions one takes, everyone is still part of a chain of technology, reliant on third parties. And if anyone in that chain of technology and services gets compromised, you are also at significant risk of compromise. One must take every reasonable precaution, such as conducting due diligence on hardware and software providers and ensuring that they do what they can to keep their systems and customers secure.

While there's certainly no guarantee of success, there are steps you can take to improve the security of your software supply chain.

First, I'd like to say, broadly, that you should generally trust your software vendors. When a software provider publishes updates, there is a good reason. Good software development, especially development that includes software security, is a process — a process that certainly doesn't end when software ships. In fact, the time to be concerned about trusting a software vendor is when they've never reported a vulnerability. If they haven't, there's a good chance that they are not being transparent, or they are not looking closely enough. I don't know which is worse.

It's also important to make sure that your software providers follow secure software best practices. When they issue updates, are the updates signed? Are application bundles and libraries signed? Do they have a functioning vulnerability reporting process and a publicly posted policy for security patches? Make certain these things are in place.
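
Signed updates are also something you can verify on your side rather than take on faith. As a rough sketch, assuming a vendor that publishes an RSA public key out of band and ships a detached PKCS#1 v1.5 / SHA-256 signature (real vendors vary in key types and formats), using the widely used Python cryptography package:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def verify_update(package_path: str, signature_path: str, pubkey_path: str) -> bool:
    """Return True if the detached signature over the update package checks out."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(package_path, "rb") as f:
        package = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, package, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

A failed check here should halt deployment entirely, not just log a warning.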

Finally, don't think it's smart to block or skip updates. You could actually “denial-of-service” yourself by blocking updates, because your software could stop functioning properly without new code. Some organizations think blocking updates improves their systems' stability. It doesn't. If your change controls are too rigid, they need to be updated so that software updates can be tested and then rolled out efficiently. Additionally, software compliance requirements as well as government and industry regulations likely mandate that systems be kept up to date.

When it comes to defending an organization against software supply chain attacks, it's crucial not only that security best practices be closely followed, but also that one hold the seemingly contradictory assumption that a successful cyberattack is imminent — an assumption shared by nearly two-thirds of IT security professionals in 2019. This is why, in addition to the usual good practices of user authentication, data backups, system and network segmentation and anti-malware, it's crucial to monitor for file integrity and anomalous traffic patterns. That means making certain that systems and data are persistently monitored for potentially malicious activity, such as unauthorized data exfiltration and other shenanigans.

That’s certainly not a panacea. But the reality is there isn’t one. Still, every organization needs to be proactive and take the steps necessary to identify any anomalies underway in their environment. And they need to make sure their software providers are, for their own part, taking an aggressive stance themselves when it comes to software security and protecting themselves, and therefore their customers, from attack.

While security-savvy organizations have long thought about the security of the software they install, it's time they also think about the software update process from each of their vendors, and continue to do so for as long as the software is in use.


Improved Risk Management Through Better Data Insights

Let’s face it: security professionals are overrun with data. Their logs are brimming with it. Their security tools are continually alerting them to potential anomalies, attacks, new vulnerabilities, changes in system configurations and all of the other things that could put enterprise data at risk. It’s safe to say that when it comes to data, security analysts and administrators are beyond overwhelmed. However, when it comes to business executives, the opposite is true: they often aren’t getting the information they need to assess what type of risk their organization’s data is under. 

The problem is, without the right data — data specific to their roles in the organization — neither security analysts nor business leaders can make effective risk management decisions regarding their corporate data. With version 7 of our Code42® Next-Gen Data Loss Protection solution, we're tackling that challenge head-on. The goal is to get the right type of information, in the right amounts, at just the right time to those who need it, so they can make the best decisions for their roles.

“ The problem is, without the right data — data specific to their roles in the organization — neither security analysts nor business leaders can make effective risk management decisions regarding their corporate data. ”

What do I mean, exactly, when I say security professionals get too much data and business executives not enough? I'm talking about a signal-to-noise problem: security pros typically get flooded with so much data that they have a hard time finding the risks they need to focus on, while business executives get so little relevant security information that they can't make effective data-driven decisions.

This can, of course, have profoundly deleterious effects on security. Bad decision-making driven by poor access to the right information will negatively impact regulatory compliance and the protection of intellectual property, business plans and confidential customer data. As for security analysts, if they can't see the data they need to take immediate steps to mitigate danger, breaches will go unnoticed until it's too late. It's one of the reasons enterprise data breaches, more often than not, go undetected for months. To be specific, the latest research tells us it takes an average of 49.6 days to detect a breach, a figure that is up year over year.

Code42 is taking steps to eliminate these barriers to effective security. At Evolution19, we are announcing a series of enhancements to the alerts, reports and dashboards within our Next-Gen DLP solution.

“ At Evolution19, we are announcing a series of enhancements to the alerts, reports and dashboards within our Next-Gen DLP solution. ”

These improvements will help business leaders get the precise information they need about data risks lurking within their organization. Of course, we will also be providing numerous enhancements needed by front-line analysts to do their jobs more effectively. 

These efforts tightly align with Code42's belief that a security team's success is directly tied to its ability to quickly detect and respond to data threats. As such, our goal is to demonstrate that security products can be both powerful and easy to use. That's why we designed our Next-Gen Data Loss Protection solution with ease of use in mind. Customers don't have to spend their time writing complex DLP rules and policies to reduce data risk like they do with traditional DLP — and now we are making it easy to get actionable information, whether one is a security analyst or a business leader.

What do I mean when talking about security analytics for business leaders? I'm talking about providing them with the insights they need to understand where the data-related risks hide within their organization. This includes where their data resides, where it may be inadvertently exposed, and how and where users are moving that data around the organization. We also will provide other high-level views of their data so they can make better decisions about managing it, determining their risk level and even investing in security defenses more effectively.

“ I’m talking about providing business leaders with the insights they need to understand where the data-related risks hide within their organization. ”

I'll give you some examples. With these enhancements, business leaders will be able to see not only how many files are shared outside of the organization, but also the kinds of data being shared. They will see how many file exfiltration events are occurring within the environment, along with trends and patterns in data movement that business leaders should know about.

Let's consider insider risks. Often when we think of insider risks, the first thing that comes to mind is the nefarious insider: the insider stealing data to sell to competitors, or taking intellectual property to their next job. Malicious employees aren't the only cause for concern, though. Sometimes employees simply are careless, or make unintentional or uneducated mistakes. They may not follow the rules around data protection because the rules aren't convenient, or they may not even be aware of what the rules are. In all cases, it's crucial that the organization is aware of trends in data usage and movement so that corrective and mitigating actions can be taken.

Of course, we are prioritizing enhancements that also will help security admins get a better signal when it comes to data visibility. This includes improved alerting, so that security analysts and managers will be sure to see the security-related situations they need to investigate. While we have always provided security managers information about where all of their data resides within their environment, where their files are located and how that data travels, in the future we will provide alerts that bring potentially risky situations to their immediate attention (a minimal sketch of this kind of detection logic follows the list below). Situations like:

  • When a file has a shared link that allows public access to an internal file.
  • When a file is shared publicly and indexed on the internet.
  • When a user copies files to removable media.
  • When a user syncs a file to a cloud service.
  • When a browser or application reads a file from a device.
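
To make the logic behind alerts like these concrete, here is a minimal sketch of a detection rule evaluated over a stream of file events. The event fields and destination labels are hypothetical illustrations, not Code42's actual schema:

```python
from dataclasses import dataclass


@dataclass
class FileEvent:
    """Hypothetical file-activity event; real schemas will differ."""
    user: str
    path: str
    destination: str  # e.g. "removable_media", "cloud_sync", "public_link"
    publicly_indexed: bool = False


# Destinations that warrant immediate analyst attention.
ALERTABLE_DESTINATIONS = {"removable_media", "cloud_sync", "public_link"}


def alerts_for(event: FileEvent):
    """Yield human-readable alerts for the risky situations described above."""
    if event.destination in ALERTABLE_DESTINATIONS:
        yield f"{event.user} moved {event.path} to {event.destination}"
    if event.publicly_indexed:
        yield f"{event.path} is shared publicly and indexed on the internet"
```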

That's a lot of powerful information, and it will go a long way toward helping organizations reduce their data security risks.

This is an exciting time for us at Code42; we continue to evolve our Next-Gen Data Loss Protection solution. It’s so rewarding to see all of our efforts come to fruition and I can’t wait to see how our customers put these new capabilities to use.


Successful Software Security Training Lessons Learned

How enterprises build, deploy and manage applications has been turned upside down in recent years. So too have long-held notions of who is a developer. Today, virtually anyone in an organization can become a developer—from traditional developers to quality assurance personnel to site reliability engineers.

Moreover, this trend includes an increasing number of traditional business workers, thanks to new low-code and so-called no-code development platforms. These platforms make it possible for non-traditional developers, sometimes called citizen developers, to build more of the apps the enterprise needs. Of course, whenever an organization has someone new developing code, it potentially introduces new security, privacy and regulatory compliance risks.

“ Recently, at Code42, we trained our entire team, including anyone who works with customer data, to ensure everyone was using best practices to secure our production code and environments. ”

For most organizations, this means they must reconsider how they conduct security awareness and application security testing. Recently, at Code42, we trained our entire team, including anyone who works with customer data. The group comprised the research and development team, quality assurance, cloud operations, site reliability engineers, product developers and others, and the goal was to ensure everyone was using best practices to secure our production code and environments.

We knew we needed to be innovative with this training. We couldn't take everyone and put them in a formal classroom environment for 40 hours; that isn't the best format for many technologists to learn.

Instead, we opted for a capture the flag (CTF) event. We organized participants into teams, which were presented with a number of puzzles designed to demonstrate common vulnerability mistakes, such as those in the OWASP Top 10. We wanted to create an engaging, hands-on event where everyone could learn new concepts around authentication, encryption management and other practices.
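
For a flavor of the kind of puzzle a CTF like this can include (an illustrative example, not one of our actual challenges), consider the classic OWASP injection mistake and its one-line fix:

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so a username like "x' OR '1'='1" returns every row in the table.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Fixed: a parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Seeing the exploit string return every row makes the lesson stick far better than a slide ever could.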

We had to create content that would challenge and yet be appropriate and interesting for everyone, including the engineers. It wasn't easy, considering each of the teams uses different tools and languages and their skillsets vary widely. Watching the teams work through the CTF was fascinating because you could see their decision-making processes when it came to remediating the issues presented. For problems where a team wasn't sure of the solution, we provided supporting training materials, including videos.

“ We had to create content that would challenge and yet be appropriate and interesting for everyone, including the engineers. It wasn't easy, considering each of the teams uses different tools and languages and their skillsets vary widely. ”

While the event was a success overall, we certainly learned quite a bit that will create a better experience for everyone in our next training. 

Let me say, the CTF style was exceptional. The approach enabled individuals to choose areas they needed to learn, and the instructional videos were well received by those who used them. But I'll tell you, not everyone was happy. About three-quarters of my team loved it; the other quarter wanted to grab pitchforks and spears and chase me down.

First, throughout the contest, the lack of a common development language proved to be a challenge. Within most of the teams, the engineers chose the problems that were in a language with which they were familiar, which often cut the quality assurance or site reliability engineers out of helping on that problem. No good.

Gamification, while well intended, caused problems. As I mentioned, we had instructional videos so that if a team didn't know the answer to a problem, they could watch the videos, get guidance and learn in the process. But watching them cost precious contest time, which actually caused individuals to skip the videos.

How we implemented the leaderboards proved counterproductive. Remember how we all (well, many of us) feared being the last person picked in gym class growing up? Well, leaderboards shouldn't be visible until the game ends, and even then they should summarize only the top finishers. No one likes to know they were in the bottom 10 percent, and it doesn't help the learning process.

Dispel the fear. These are training and awareness classes. While official, accredited security training often has a pass/fail outcome, this awareness training is for education. Even so, our employees feared their performance would somehow be viewed as bad and could affect their performance reviews—or employment. Face these rumors up front and make it clear the CTF results aren't tied to work performance.

Overall, our team did learn valuable lessons using our CTF format — the innovative approach we took to educate them was successful in that way. But next time I hold a contest, we will definitely incorporate changes based on the lessons above. And I'll work harder to strike the balance between a formal lecture-and-class setting and a competitive event when participants have varying experience and skillsets.



Securing Data in Cloud Chaos

To succeed, every enterprise depends on data and the insights that can be gleaned from that data. Enterprises today are creating much more data than in prior years—much of it critical to their digital transformation efforts. And how this data is stored within enterprises has changed dramatically, which is having a profound impact on how that data must be secured.

How so? At one time, most enterprise data resided within enterprise databases and applications, and these applications remained (relatively) safely on enterprise endpoints or tucked back in the data center. Not anymore.

“ Gartner estimates that 80 percent of all corporate data today is unstructured. ”

That was the age of structured data. Today, data is more likely to be unstructured, residing in word-processing files, spreadsheets, presentations, PDFs and many other common formats. The research firm Gartner estimates that 80 percent of all corporate data today is unstructured.

This means our enterprise data is scattered everywhere. And just because data isn't structured within an application doesn't mean it isn't critical – unstructured data today includes financial information, trade secrets, marketing plans and work with contractors and business partners. Not all of this data is the same, nor is it managed in the same way — yet it all must be protected.

How we share unstructured data is also changing. No longer is data sent merely as email attachments. Today, data is shared through social media programs, cloud apps and communication platforms, such as Slack. In many organizations, staff are sharing sensitive data, such as consumer information, intellectual property, prospect lists, financial data and the like. Security teams need to be alerted when sensitive information is shared.

These trends should give pause to anyone concerned about securing their enterprise information.

“ One of the most important steps for any organization that wants to start proactively securing their unstructured data is to determine where that data resides and then find viable ways to protect that data. ”

According to our 2018 Data Exposure Report, 73 percent of security and IT leaders believe there is data in their company that only exists on endpoints and 80 percent of CISOs agree that they can’t protect what they can’t see. Seventy-four percent believe IT and security teams should have full visibility over corporate data.

Unfortunately, without a dedicated and continuous focus on securing unstructured data, such visibility won’t ever exist. Only chaos. 

Yes, most organizations take reasonable steps to protect their applications and databases from costly data breaches. They invest in endpoint technologies that protect their users' endpoints from malware. They focus on database security, application security and related efforts. And they try to control access to their local enterprise networks. But the challenging reality remains: even if an organization executed perfectly on such a security architecture, it would still leave itself open to a vast amount of data theft and exploitation, because it would be ignoring the roughly 80 percent of its data that is unstructured.

Legacy security methods haven’t kept pace

It's critical that enterprises get the security of their unstructured data right. Securing unstructured data is different from securing data stored within applications and databases.

One of the most important, and likely first, steps for any organization that wants to start proactively securing their unstructured data is to determine where that data resides and then find viable ways to protect that data. Other capabilities to put in place include monitoring who has access to that data, indexing file content across storage devices, cloud storage and cloud services, and monitoring that data for potential loss, misuse and theft.
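
As a toy illustration of that first step, here is a short Python sketch that crawls a file share and records where common unstructured document formats live. The extension list and record fields are illustrative assumptions, not a prescription:

```python
import pathlib

# Common unstructured document formats to inventory (illustrative list).
DOCUMENT_EXTENSIONS = {".doc", ".docx", ".xls", ".xlsx",
                       ".ppt", ".pptx", ".pdf", ".csv"}


def inventory(root: str):
    """Walk a file share and yield a record for each unstructured document."""
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in DOCUMENT_EXTENSIONS:
            stat = path.stat()
            yield {
                "path": str(path),
                "size_bytes": stat.st_size,
                "modified": stat.st_mtime,
            }
```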

Having these capabilities in place will not only help organizations better secure that data and identify careless handling or even malicious insiders, but also improve their ability to conduct in-depth investigations and identify threats, preserve data for regulatory compliance and litigation, and rapidly recover lost or ransomed files.

The fact is that unstructured data makes up 80 percent of enterprise data today, and the places it's being stored are expanding. It's imperative you give it the appropriate level of focus. While you can't put unstructured data back in the centralized data center, you can bring a structured approach to data security that will rein in the chaos and adequately protect your enterprise information.


We Are All Surfing with the Phishes

Phishing is in the news again – and for good reason. Last month, the story first came to light of a “megabreach” drop of 773 million email and password credentials. At first, this disclosure made a sizable splash. But as researchers dug in further, it turned out the dump of online credentials had been circulating for some time, as independent security journalist Brian Krebs covered on his blog, KrebsOnSecurity. Maybe the news wasn't as big of a deal as we first thought?

The news turned out to be bigger, in some ways. More large tranches of credentials continued to be uncovered in the days that followed, bringing the total to 2.2 billion records of personal data made public. Even if the vast majority of these records are old, and by all estimates they probably are, this massive collection of information substantially increases the risk of phishing attacks targeting these accounts now that they've been pushed above ground.

“ According to the State of the Phish Report, credential-based compromises increased 70 percent since 2017 and 280 percent since 2016. ”

Phishing remains one of the most common and, unfortunately, successful attacks that target users – and it's not just user endpoints that are in the sights of the bad guys. Often, phishers aim first at users as a way to get closer to something else they are seeking, perhaps information on corporate executives, business partners, or anything else they deem valuable. When an employee clicks on a link or opens a maliciously crafted attachment, his or her endpoint can then be compromised. That not only puts the user's data at risk of compromise or destruction, such as through a ransomware attack, but also lets attackers use that endpoint as a platform to dig deeper into other networks, accounts and cloud services.

Consider Proofpoint's most recent annual State of the Phish Report, which found that 83 percent of global information security respondents experienced phishing attacks in 2018. That's up considerably from 76 percent in 2017. The good news is that about 60 percent saw an increase in employee detection after awareness training. According to the report, credential-based compromises increased 70 percent since 2017 and 280 percent since 2016.

Unfortunately, the report also found that data loss from phishing attacks has tripled year over year. Tripled.

“ Someone is going to click something bad, and antimalware defenses will miss it. ”

This latest uncovering of credentials is a good reminder of why organizations always have to keep their defenses primed against phishing attacks. These defenses should be layered, including security awareness training, antispam filtering, and endpoint and gateway antimalware, along with comprehensive data protection, backup and recovery capabilities for when they're needed, such as following a malware infection or successful ransomware attack.

However, even with all of those controls in place, the reality is that some phishing attacks are going to succeed. Someone is going to click something bad, and antimalware defenses will miss it. The organization needs to be able to investigate successful phishing attacks. This includes investigating and understanding the location of IP addresses, gaining insights into the reputation of internet domains and IP addresses, and establishing workflows to properly manage each case. These investigations can help your organization protect itself by blocking malicious mail and traffic from those addresses, notifying domain owners of bad activity, and even assisting law enforcement.
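
The raw material for such an investigation is usually a set of indicators pulled from the offending message. Here is a minimal sketch of extracting candidate domains to feed into reputation lookups; the regex is deliberately simple, and production parsers handle far more encodings and evasions:

```python
import re

# Capture the host portion of every http(s) URL in the message body.
URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)


def extract_domains(message_body: str):
    """Pull the domains out of every URL in a suspicious message."""
    domains = set()
    for netloc in URL_PATTERN.findall(message_body):
        # Strip any port and normalize case before handing off
        # to reputation lookups or blocklists.
        domains.add(netloc.split(":")[0].lower())
    return sorted(domains)
```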

When you find a file that is suspected of being malware, you can then search across the organization for that file. Chances are that, if it was a malicious file used in the phishing attack, it may have targeted many people in the organization. Nathan Hunstad details how, in his post Tips From the Trenches: Enhancing Phishing Response Investigations, our hunt file capability integrates with security orchestration, automation and response (SOAR) tools to rapidly identify suspicious files across the organization and swiftly reduce risk.
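
Under the hood, hunting a file generally means matching cryptographic hashes rather than file names, since attackers rename files freely. Here is a minimal single-machine sketch of the idea; an enterprise hunt capability like the one described above runs the equivalent across every endpoint:

```python
import hashlib
import pathlib


def sha256_of(path: pathlib.Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def hunt(root: str, bad_hashes: set):
    """Yield every file under root whose SHA-256 matches a known-bad hash."""
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file():
            try:
                if sha256_of(path) in bad_hashes:
                    yield path
            except OSError:
                continue  # unreadable file; skip rather than abort the sweep
```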

There's another lesson to be learned here, one that is a good reminder for your organization and your staff: we are all on the dark web, and much of its information is about us. All of the information that has been hacked over the years – financial information, Social Security numbers, credit reports, background checks, medical information, employment files and, of course, emails and logon credentials – is likely to be found there.

That's why, even though much of the credential information that has surfaced from the depths of the web turned out to be old, there are still lessons worth remembering. For instance, it is critical to assume increased risk given all of the information that is out there and how it can be used in phishing attacks.


It’s Time to Bring Shadow IT Into the Light

Mention shadow IT to most enterprise IT and security professionals, and you are likely to elicit a frown. It's understandable. At its worst, shadow IT, such as an unsanctioned server or cloud storage service, operated (shall we say, less than ideally) by business managers, can place systems and data at serious risk.

However, there’s another side to shadow IT. Shadow IT allows staff to choose their cloud apps and services, which helps improve productivity and drive innovation. Not to mention increase employee happiness. 

Still, shadow IT can and does pose significant risks to the organization, as with the poorly managed server we mentioned. When users decide for themselves what cloud services they're going to use or how to collaborate with co-workers, IT loses visibility into those systems and data. Ultimately, this means enterprise data is scattered across multiple cloud services, and visibility into vitally important data is lost. Not good.

“ According to Gartner, shadow IT comprises roughly 40 percent of enterprise technology purchases. That is, business leaders decide, manage, and control nearly 40 percent of technology purchases. ”

After all, if IT doesn’t know a technology is in place, then it’s impossible to secure it or the data it holds. And it’s impossible to know who is accessing that data and why. 

Regardless, shadow IT is a permanent part of the enterprise landscape, and IT and security teams need to adapt. According to Gartner, shadow IT comprises roughly 40 percent of enterprise technology purchases. That is, business leaders decide, manage, and control nearly 40 percent of technology purchases.

That much technology, and the data it holds, can't be left to lurk in the shadows.

We know why business users are so quick to embrace shadow IT. It can often take weeks or months for IT departments to deploy new servers or applications. But with only a credit card, business users can access cloud applications and services within minutes. 

The question becomes, how do IT teams harness that innovation from their staff, while also ensuring their data is adequately secured and protected?

They need to bring it out of the shadows. 

The first step is to assess what shadow applications and cloud services are in place so that there is an accurate baseline of the cloud applications and services in use.

There are a number of ways to achieve this, and the best method depends on the nature and size of your organization. You could start with a simple survey of the business groups to collect information on the applications they are using. Or you could begin by monitoring traffic and endpoints to see what applications are in use and where data is traveling. 
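
As one illustration of the monitoring route, a toy script that tallies cloud-service lookups in DNS logs can yield a quick first baseline. The log format and service catalog below are hypothetical stand-ins:

```python
import collections
import csv

# Hypothetical catalog of cloud services to watch for; extend as needed.
CLOUD_SERVICES = {
    "dropbox.com": "Dropbox",
    "box.com": "Box",
    "slack.com": "Slack",
    "drive.google.com": "Google Drive",
}


def match_service(query: str):
    """Map a DNS query to a known cloud service, matching on domain suffix."""
    q = query.rstrip(".").lower()
    for domain, name in CLOUD_SERVICES.items():
        if q == domain or q.endswith("." + domain):
            return name
    return None


def baseline(log_path: str) -> collections.Counter:
    """Count cloud-service lookups in a CSV log of (timestamp, client, query) rows."""
    counts = collections.Counter()
    with open(log_path, newline="") as f:
        for timestamp, client, query in csv.reader(f):
            service = match_service(query)
            if service:
                counts[service] += 1
    return counts
```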

However you establish your baseline, the important thing is to get started. 

“ Now that you’ve identified shadow IT, whether it be cloud apps, storage or platforms, the goal shouldn’t be to reprimand or shut down these services. It should be to ensure the services that the staff has chosen are correctly managed and secured. ”

Now that you’ve identified shadow IT, whether it be cloud apps, storage or platforms, the goal shouldn’t be to reprimand or shut down these services. It should be to ensure the services that the staff has chosen are correctly managed and secured so that IT and security teams have adequate data visibility. That is, they can see what data is flowing to these services and ensure access to that data is controlled, and that the data is protected and recoverable. 

This way, when that poorly managed server is uncovered, it can become an educational moment. Staff can be made aware (or reminded) of how vital patching, systems updates and proper monitoring of systems and data are to the security of the organization. And rather than taking the server down, IT can then monitor and properly manage it. The same is true for all cloud services and applications. Rather than trying to ban them all, manage them.

One way to manage them is to use a solution like Code42 Next-Gen Data Loss Protection. It was built to collect information about every version of every file, giving businesses full visibility into where data lives and moves — from endpoints to the cloud. With that kind of oversight, security teams can monitor, investigate, preserve and ultimately recover their valuable IP without having to block data use or rely on the restrictive policies that are part of traditional data loss prevention (DLP). Instead of working with limited visibility into a subset of files (when they need to gauge the risk of all their data) or hindering employee productivity, next-gen DLP helps security teams foster open, collaborative work environments.

When shadow IT is managed this way, the organization derives some distinct advantages. IT and security teams become better business enablers, supporting the needs of staff and business users. They become trusted advisors and facilitators who help the organization move forward securely.


Don’t Let Your Security Be Blinded by Cloud Complexity

It's incredible how complex today's IT environments have become. Among the central promises of cloud computing were simplified management and security. Almost paradoxically, however, it is the ease of cloud deployment and use that led to an explosion of adoption, which has presented a significant challenge for security teams.

The challenge isn't necessarily the number of cloud services in use but how scattered an organization's data becomes across those services. It doesn't seem that long ago that nearly all enterprise data was stored on local drives or shared storage in a data center. No more. With the rise in popularity of cloud services, files are likely to be stored on user endpoints as well as across a number of cloud services, including Box, Google Drive and OneDrive, or collaboration platforms like Slack.

“ Unfortunately, the rise in IT management complexity will continue to make for rising security challenges. ”

Adding to the complexity, the research firm Gartner estimates that more than 80 percent of enterprise data is unstructured, and most of that data is expected to be stored in cloud systems.

And, while this may be surprising — because it feels like cloud adoption has been under way for some time now — the reality is that the move to the cloud is still in its early stages. According to the market research firm Stratistics MRC, the global cloud storage market is expected to grow from $19 billion in 2015 to more than $113 billion by 2022. That's an annual growth rate of roughly 29 percent.

All of this compromises the ability of security teams to peer into the movement and location of the organization's sensitive data. Security teams simply cannot monitor organizational data for changes or see where it travels. Security investigations become harrowing, requiring complex workflows with multiple tools just to analyze potential security events — and forget about knowing for certain whether specific data files are backed up and recoverable.

These are questions security teams need to be able to answer — not only for security and regulatory compliance demands but to also ensure data availability for business.

Unfortunately, the rise in IT management complexity will continue to make for rising security challenges. And, let's be honest, security technologies have not always made the jobs of security professionals easier.

Consider how difficult most security tools are to set up and manage. This is unfortunately the case for most prevailing security technologies: web application firewalls, intrusion detection and prevention systems, encryption and so on. The same is true for traditional enterprise DLP.

The more complex the environment, the more challenging security becomes — and the more seamlessly security must fit into the workflows of enterprise security managers.

This is why we made Code42 Next-Gen DLP straightforward to connect to cloud services and easy to use. Rather than being blinded by complexity, security teams can see where files are moving and quickly scrutinize whether something needs to be investigated. The solution provides a comprehensive view of file activity across both endpoints and cloud services.

Code42 Next-Gen DLP is designed to simplify investigatory workflows, shorten incident response time and help to reduce security and compliance risks.

In order to effectively manage cloud complexity, security teams need to be able to simplify their workflows — and do so regardless of the cloud services employees choose to use. After all, our IT environments aren't going to get any easier to manage any time soon. We are creating more files, stored in more cloud services, than ever before — and security threats and regulatory demands aren't going away either. Your best defense is to ensure you have the visibility necessary to manage and secure your user data no matter where that data is being used and stored.


‘Tis the Season the Greedy Go Phishing

It's the time of year when we (hopefully) spend a little more time away from work and more time with friends and family, relaxing and celebrating. It's to be expected that many of us are a bit more relaxed during the holiday season. Perhaps off guard. This is exactly where the bad guys want us. They're counting on it. It's why they are more active this time of year.

The holidays have always been a time for the greedy to strike. Years ago, their primary vector of attack was the telemarketing scam promoting fake charities. Criminals still run these types of scams, of course, but they have also kept up with the technological trends of the times. Today you are just as likely, if not more likely, to be hit with a phishing email, instant message or scam on social media.

“ As staff use corporate devices for both work and shopping — and accessing data files as well as connecting to the network — there is an increased risk that clicking on the wrong file or link could expose your organization to malware, data theft, ransomware attacks and more. ”

But Rob, this is a corporate security blog — why are you writing about consumer security? Well, here's the thing: scam and phishing-related activity doesn't just place consumers at risk. Your corporate employees are consumers, too — and think about how the separation between people as consumers and as workers has been erased. The days of employees having separate personal devices and work devices are long gone. Many organizations are BYOD now, whether by policy or by the reality on the ground.

The reality is your employees are using work devices to click on emails, shop and research the holiday gifts they hope to share. As staff use these devices for both work and shopping — and accessing data files as well as connecting to the network — there is an increased risk that clicking on the wrong file or link could expose your organization to malware, data theft, ransomware attacks and more.

Here are just some of the techniques attackers use to trick employees:

  • Emails that look like they come from insiders of the organization or trusted partners
  • Bogus websites that promise deep discounts, but are really designed to siphon personal data and credit card numbers
  • Mass phishing scams that impersonate popular retail brands (that steal usernames and passwords that thieves will try to use elsewhere)
  • Spurious order or shipment update emails
  • Phony charities
  • Social media updates and tweets crafted to trick people to scam websites
  • Holiday ecards (isn’t anything sacred?)

The good news is that because attackers use the holidays as a moment of opportunity, you can do the same: take constructive steps to build employee awareness about phishing and online scammers. To protect their safety and yours, now is a perfect time to help them understand that they are being targeted during the holiday season.

Here are some things to remind employees to do to protect themselves and your organization:

  • Avoid public Wi-Fi, and always connect over a secure internet connection.
  • Always use best practices when it comes to password management.
  • Use unique passwords for each service and never reuse work passwords for home.
  • Use a separate email for online shopping.
  • Dedicate one credit card or prepaid card for online shopping, and don’t use debit cards (the rules for fraud protection are often different).
  • Be vigilant for phishing emails, social media posts and direct messages. Don’t ever click on unfamiliar links; when an offer seems too good to be true, it probably is.
  • Look closely at all email communications — watch for minor changes in the email address name or domain, check the validity of the domains that links point to, and watch for typos and odd grammar in the message text (a small sketch of automating the lookalike-domain check follows this list).
  • Back up devices and data; this is the best way to recover from such things as ransomware attacks.
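
That lookalike-domain scrutiny can even be partially automated. Here is a small sketch that flags a sender domain within a couple of character edits of a trusted domain; the trusted list is illustrative:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


TRUSTED_DOMAINS = {"code42.com", "paypal.com", "amazon.com"}  # illustrative


def lookalike(sender_domain: str, max_edits: int = 2):
    """Return the trusted domain a sender is suspiciously close to, if any."""
    d = sender_domain.lower()
    if d in TRUSTED_DOMAINS:
        return None  # an exact match is fine
    for trusted in TRUSTED_DOMAINS:
        if edit_distance(d, trusted) <= max_edits:
            return trusted
    return None
```

For example, lookalike("paypa1.com") returns "paypal.com", while mail from a legitimate, exactly matching domain passes through untouched.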

Of course, much of the same advice holds all year around, but it’s worth being extra diligent this time of year. The less time spent cleaning up malware and recovering from attacks, the more time we all have to enjoy the season.

Cybersecurity That Users Are Thankful For

When do you most value your applications or your ability to access your data? The very second after something goes awry and your access is lost. It's true, and it's like the cliché: you don't know what you have until it's gone.

In this way, computing is a lot like a utility service: we just expect to flip a switch and have the lights go on. We expect to dial a number and have the phone system work. Moreover, we don't tend to think about how much we appreciate these technologies until the moment they don't work as expected. If you don't believe me, talk to the people diligently working on your IT support team right now. Ask them how often staff call to thank them for keeping access to business-technology systems available and smooth.

Then ask them how often the phone rings when something goes down.

Exactly.

Cybersecurity is very similar. No one thinks about the technologies protecting them until they fail, and there’s a breach or systems become inaccessible. How security professionals help others manage risk can also create challenges.

“ While some rules are necessary, security technology that is focused on prevention only can position security teams as blockers and deniers. ”

What I mean by this is that often, when staff hears from their security teams, it’s because something went wrong. The user did something wrong, or the security team is going to inform staff that they can’t continue doing things a certain way: Don’t access public Wi-Fi without a VPN. Stop using this password. Hurry up and patch and reboot all of these systems. No, you can’t use that cloud service; you have to use this cloud service instead.

While some rules are necessary, security technology that is focused on prevention only can position security teams as blockers and deniers. There are, however, other ways security teams can serve as business partners and architect solutions that not only secure data but also make it easier for users to get their work done. At Code42, we are always looking for ways to provide added value directly to the user.

Here's an example. As part of the Code42 Next-Gen Data Loss Protection solution, we provide users the ability to back up and secure their data. Data loss protection with that extra level of recoverability gives users additional peace of mind. They know that if their notebook dies, or someone clicks on a malicious link, they don't have to panic. There'd be no reason to. They'll see something went wrong, but they'll know their data is backed up, safe and recoverable.

Recently, I had the opportunity to watch this play out with a customer. They wanted to make a security purchase, but they were low on budget at the time and thought they would have to postpone. However, when the IT team found out that they would get data loss protection and the ability to consolidate their endpoint backup solution, they decided to move forward.

They went forward with the investment because they realized it was a win for the IT team, the security team and the end user.

My takeaway from this experience is also a good lesson for security professionals: don't fixate on prevention technology that is narrowly focused on denying and blocking. Look for solutions that enable end users and IT to be not only more secure but also more collaborative and productive. That's something everyone would be thankful for.