Code42 is a Key Player in Cloud Backup

The cloud backup market is expected to grow to a value of approximately $5.6 billion by 2023, and Code42 is a key player in the category, said market research firm Market Research Future. The new study provides detailed analysis of the cloud backup market structure, along with forecasts of the various sub-segments of the category: primary storage, disaster recovery, cloud storage gateway, and more.

Code42 is proud to be listed as a key player in this report, along with household name corporations such as Microsoft, Amazon, Oracle, Dropbox, and IBM. We couldn’t agree more with the report’s findings that cloud backup will only continue to grow in coming years. You only have to look at the headlines around cybersecurity events like WannaCry and NotPetya/GoldenEye to see why.

We’ve said it before and we’ll say it again: best-in-class endpoint backup solutions render ransomware attacks like WannaCry little more than an annoyance, thanks to self-service restores of user data that’s backed up every 15 minutes by default. But that’s far from the only advantage of a cloud backup solution. Here are a few more items to keep in mind when considering cloud backup:

Do more with fewer agents

Rather than continually add more agents to employee endpoints, enterprises are looking for applications that can fulfill more than one purpose, such as backup and data loss prevention. Gartner says that by 2020, one in three organizations will use backup for use cases beyond traditional operational recovery. We believe that a cloud backup solution should offer three key benefits:

1. Spot risk sooner

The leading endpoint backup solutions now leverage comprehensive endpoint data visibility with emerging analytics capabilities to fight increasing cyberattacks and the ransomware epidemic, as well as mitigate insider threats. With a clear baseline for normal data activity, organizations can spot the anomalies quickly and take action faster—because even the outliers follow patterns.
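
To make the idea of a behavioral baseline concrete, here is a minimal sketch of flagging unusual file activity against a per-user baseline. It illustrates the general technique only; the counts, threshold and function names are hypothetical, not Code42’s implementation.

```python
from statistics import mean, stdev

def build_baseline(daily_file_counts):
    """Summarize a user's normal daily file activity (e.g., files modified or moved)."""
    return mean(daily_file_counts), stdev(daily_file_counts)

def is_anomalous(todays_count, baseline, z_threshold=3.0):
    """Flag a day whose activity falls far outside the user's normal range."""
    avg, spread = baseline
    if spread == 0:
        return todays_count != avg
    return abs(todays_count - avg) / spread > z_threshold

# Hypothetical history: files touched per day for one user over two weeks
history = [42, 38, 51, 47, 40, 45, 39, 44, 50, 43, 41, 46, 48, 37]
baseline = build_baseline(history)

print(is_anomalous(44, baseline))    # a typical day -> False
print(is_anomalous(2600, baseline))  # sudden mass modification or movement -> True
```

Commercial products draw on far richer signals (file types, destinations, time of day), but the principle is the same: define normal first, then flag departures from it.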

2. Move forward faster

Best-in-class endpoint backup solutions deliver faster data restores for faster recovery from traditional data loss incidents—from user error and hardware failure to cyberattacks and ransomware. But forward-thinking organizations are also using endpoint backup to streamline device migration and large-scale tech refresh projects. Automated workflows, lightning-fast restores and simple self-restore functionality reduce IT burden, maintain end-user productivity and allow the organization to get back to business in minutes or hours, instead of days or weeks.

3. Always bounce back

Many emerging solutions have skipped over the fundamentals of enterprise data backup—or attempted to position a file sync-and-share product as a backup solution. At the most basic level, backup should guarantee recovery and offer complete confidence in business continuity. True endpoint backup ensures that your organization can always bounce back—no matter what.

To view the research report, visit the report page on Market Research Future’s website.

Why Local Deduplication Is the Key to Faster Restores

Scan through the Code42 blog and you could sum things up by saying, “Back up your data, back up your data, back up your data.” It’s true that backing up all of your endpoint data is the critical and foundational step in a modern data security strategy (and something most companies still don’t get right). But data recovery is where the rubber meets the road. Faster restores mean you get your files back sooner, minimize downtime and get back to business faster.

What if we told you there was one simple way to make your restores up to nine times faster—and that many enterprise backup solutions still choose NOT to use this approach?

The old way: minimize storage with global data deduplication

Most businesses are accustomed to worrying about minimizing data storage to control costs. Global data deduplication was designed to address this concern, creating one giant data store containing a single copy of each unique block of data across all users in the enterprise. Restores with global dedupe can be painfully slow—it takes a long time to scan one enormous data store to locate all the unique pieces of data needed for a given user’s device restore. But hey, slow restores are worth it because you’re paying a lot less for storage, right?

Wrong. Today the cloud makes flexible data storage incredibly affordable. That’s why Code42 offers truly unlimited backup storage for our customers. So if your backup provider is still touting the benefits of global dedupe, it might be time to ask, “Benefits to whom?”

The better way: maximize restore speeds with local deduplication

Freed from the need to minimize the overall size of data backups, businesses can now take full advantage of local data deduplication to maximize restore speeds. Local dedupe creates user-specific data stores, making it quick and easy for the backup application to locate a user’s files and data in a restore scenario. Just how much faster? One independent study showed that Code42’s restores using local deduplication were five to nine times faster than restores using global deduplication. That’s no small advantage. And with the cost of downtime and lost productivity rising, every minute truly counts.

Want to get your data back faster? Read our latest white paper “Get Your Data Back Faster: Why Enterprises Should Choose Local Deduplication for Endpoint Data Backup” to see how local data deduplication and unlimited data storage give you powerful business continuity advantages.

451 Research: Code42 Is Well-Positioned for the Data Security Space

Code42’s proven endpoint backup platform puts us in an ideal place to solve some of today’s most complex data problems, especially those related to security. Customers are starting to realize the true potential of endpoint data and are demanding more visibility to understand data movements in and out of the organization. With the launch of 6.0, Code42 took a major leap into the data security space, and the update is the subject of a new Market Insight report from 451 Research.

The 451 Research report highlights that vendors in the backup market have been gravitating toward additional security features to address the latest strains of cyberthreats, such as ransomware. Code42 receives praise for taking on not only ransomware but also a more common and potentially more damaging danger: insider threat. “Addressing the threat of ransomware in particular has been a recurring theme among many vendors in this space, but the company is tackling internal threats with equal zeal as external ones,” states the report.

The Market Insight report also covers additional 6.0 features such as Access Lock and Okta integration. Looking at the totality of the new version, 451 Research states that “vendors such as Code42 are in a good position to deliver advanced data management and data loss-prevention capabilities since they see every file and the changes made to them during the backup process.”

To learn more about 451 Research’s take on Code42’s 6.0 launch, read the report today.

Simple Is Better—and Policy-Dependent Backup Isn’t Simple

In the 1300s, the principle known as Occam’s razor was established, holding that the best solution is the simplest one. About 600 years later, network drive policy was born. It sounds reasonable enough: You tell your employees to back up their files to the network drive. They do it. Then you back up that drive. Voila! Your files are all protected and backed up. Right?

Wrong. The problem is that users are also following this principle of simplicity. They’re looking for the path of least resistance—and network drive policies only add burdens to their daily workflows. So they don’t back up. Or they forget to back up. Or they come up with their own (unreliable) means of backup.

Policy-dependent backup leaves a widening gap in data security

As much as two-thirds of a company’s data now lives exclusively on endpoints—where digital productivity takes place. In the typical enterprise, 35 percent of endpoints haven’t been recently backed up. Do the math (two-thirds of the data, times the 35 percent of endpoints going unprotected, comes to about 23 percent) and you see the big problem: roughly one-quarter of a company’s data is not protected—unrecoverable if disaster strikes, invisible to IT and highly vulnerable to hacks and data theft.

The simplest solution is backing up data right at the source: the endpoint

William of Ockham, credited as the creator of Occam’s razor, was right—a simpler solution is a better solution. If you or your organization’s leadership still believe policy-dependent backup can protect your business, it’s time to read our new white paper, Debunking the Myth of Policy-Dependent Backup. See why network drive policies impede productivity, burden IT and leave dangerous data security holes—and understand why endpoint backup is the simple solution you and your users have been looking for.


Kick Off Backup Awareness Month by Going Beyond Backup

BAM! That’s the sound of Code42 starting Backup Awareness Month (BAM). We’re posing tough questions to all our readers to help you ensure your data collection and protection strategy is truly up to snuff:

Do you have endpoint backup—and do you know it works? Our CTRL-Z study found that half of all corporate data now lives on endpoint devices, and that 42 percent of IT and business decision makers say losing all their endpoint data would be business-destroying. But even though more value than ever lives on endpoint devices, one in five organizations still lacks an endpoint backup solution. Another 20 percent say they have endpoint backup, but they haven’t tested it. That could be why only a third of our study respondents said they have faith in their organization’s ability to get back up and running quickly after a data loss incident.

Can you spot risks—from inside and out—quickly? Hacks make headlines every day. But the more insidious attacks come from inside the organization. The increasing threat posed by insiders can’t be solved with traditional perimeter-based security, leaving security teams scrambling to figure out how to mitigate this growing risk.

Later this month, we’ll dive into the details of what it takes to build a robust insider threat program to help you spot risky employee behavior sooner, before it sinks your business.

Can you move forward faster? No matter the cause, data loss brings business to a halt. But it’s not just the malicious incidents we mentioned above. It’s everyday occurrences like device migrations and tech refreshes.

Stay tuned to see how forward-thinking organizations are leveraging endpoint backup to make device migrations easier on everyone while also mitigating the risk of data loss.

Can you bounce back—no matter what? We’ve all probably wished we had a CTRL-Z in other aspects of our lives, but when it comes to data loss incidents, you can have one. Endpoint backup gives you the ability to undo user errors, restore devices in minutes, and even go back to the moment before a cyberattack.

Check back throughout the month to learn more about how to spot risk sooner, move forward faster, and always bounce back. Or, dive in right now with the Five Reasons Why You Need Endpoint Backup white paper!

File Server or Endpoint Backup? 10 Considerations

Today’s workplace looks very different than it did just five years ago. The average knowledge worker uses three endpoint devices, which move fluidly from the office to home and countless locations around the globe. Every day, the average worker creates an immense amount of data, and that amount is constantly growing. With today’s modern workforce, much of that data lives outside your company’s network, completely absent from the central file server. That means your end users are walking around with a store of valuable data every day—and that data needs to be backed up.

Many organizations still rely on traditional file server backup to protect their data, but that method has many shortcomings compared to the elegant simplicity of automatic endpoint backup. Network drive backup is a cumbersome, manual process: it requires users to follow procedure (which they won’t, if it impedes their productivity), and it otherwise pales in comparison to endpoint backup. Check out the chart below for 10 quick considerations about file server backup versus endpoint backup.

File Server or Endpoint Backup

For more on this topic, be sure to read the white paper “Debunking the Myth of Policy-Dependent Backup” to learn why network drive policy is no substitute for a true endpoint backup solution.

What They’re Not Telling You About Global Deduplication

When it comes to endpoint backup, is global deduplication a valuable differentiator?

Not if data security and recovery are your primary objectives.

Backup vendors that promote global deduplication say it minimizes the amount of data that must be stored and provides faster upload speeds. What they don’t say is how data security and recovery are sacrificed to achieve these “benefits.”

Here’s a key difference: with local deduplication, data redundancy is evaluated and removed on the endpoint before data is backed up. Files are stored in the cloud by user, so they are easily located and restored to any device. With global deduplication, all data is sent to the cloud, but only one instance of each data block is stored.
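
As a rough sketch of where that difference shows up during backup, consider block-level deduplication on the client. The block size, hashing scheme and store layout below are illustrative assumptions, not a description of Code42’s (or any vendor’s) actual implementation.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # illustrative 4 MB blocks

def split_into_blocks(data: bytes):
    for i in range(0, len(data), BLOCK_SIZE):
        yield data[i:i + BLOCK_SIZE]

def backup_locally_deduped(data: bytes, user_store: dict) -> None:
    """Local dedup: redundancy is removed on the endpoint, against this user's own archive."""
    for block in split_into_blocks(data):
        digest = hashlib.sha256(block).hexdigest()
        if digest not in user_store:       # only blocks new to this user leave the device
            user_store[digest] = block     # stand-in for uploading to the user's archive

def backup_globally_deduped(data: bytes, global_store: dict, user_manifest: list) -> None:
    """Global dedup: one shared store keeps a single copy of each block across all users."""
    for block in split_into_blocks(data):
        digest = hashlib.sha256(block).hexdigest()
        user_manifest.append(digest)       # the user keeps only a map to shared blocks
        if digest not in global_store:
            global_store[digest] = block

# Illustrative usage
user_store, global_store, manifest = {}, {}, []
sample = b"quarterly report " * 500_000   # a few megabytes of throwaway data
backup_locally_deduped(sample, user_store)
backup_globally_deduped(sample, global_store, manifest)
```

In the first case the membership check runs against a per-user archive; in the second, duplicates are eliminated only against the store shared by every user in the organization.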

They tell you: “You’ll store less data!”

It’s true that global deduplication reduces the number of files in your data store, but that’s not always a good thing. At first blush, storing less data sounds like a benefit, especially if you’re paying for endpoint backup based on data volume. But other than potential cost savings, how does storing less data actually benefit your organization?

Not as much as you think.

For most organizations, the bulk of the files removed by the global deduplication process will be unstructured data such as documents, spreadsheets and presentations—files that are not typically big to begin with—making the storage savings from global dedupe minimal. The files that gobble up the bulk of your data storage are those that are unlikely to be floating around in duplicate—such as databases, video and design source files.

What they don’t tell you: Storing less data doesn’t actually benefit your organization. Smaller data stores benefit the solution provider. Why? Data storage costs money and endpoint backup providers pay for huge amounts of data storage and bandwidth every month. By limiting the data stored to one copy of each unique file, the solution provider can get away with storing less data for all of its customers, resulting in smaller procurement costs each month—for them.

Vendors that offer global dedupe also fail to mention that it puts an organization at risk of losing data because (essentially) all the eggs are in one basket. When one file or data block is used by many users but saved just once (e.g., the HR handbook for a global enterprise, sales pitch decks or customer contact lists), all users will experience the same file loss or corruption if the single instance of the file is corrupted in the cloud.

They tell you: “It uploads data faster.”

First, let’s define “faster.” The question is, faster than what? Admittedly, there’s a marginal difference in upload speeds between global and local deduplication, but it’s a lot like comparing a Ferrari and a Maserati. If a Ferrari tops out at 217 miles per hour and a Maserati tops out at 185 miles per hour, clearly the Ferrari wins. It’s technically faster, but considering that the maximum legal speed on most freeways is 70-75 miles per hour, the additional speed of both vehicles is a moot point. Both cars are wickedly fast, but a person is not likely to get to drive either at its top speed, so does the difference really matter? The fact is, it doesn’t.

The same can be said about the speed “gains” achieved by utilizing global deduplication over local deduplication. Quality endpoint backup solutions will provide fast data uploads regardless of whether they use global deduplication or local deduplication. There’s a good chance that there will be no detectable difference in speed between the two methods because upload speed is limited by bandwidth. And even if you could detect a difference in upload speed, what does that matter? What’s important is the speed of recovery, and on that matter, local deduplication is the clear winner.

What they don’t tell you: Global deduplication comes at a cost: restore speeds will be many times slower than restoration of data that has been locally deduplicated. Here’s why: with global deduplication, all of your data is stored in one place and only one copy of a unique file is stored in the cloud, regardless of how many people save a copy. Rather than store multiples of the same file, endpoint backup that utilizes global deduplication maps each user to the single stored instance. As the data store grows in size, it becomes harder for the backup solution to quickly locate and restore a file mapped to a user in the giant data set.

Imagine that the data store is like a library. Mapping is like the Dewey Decimal System, only the mapped books are stored as giant book piles rather than by topic or author. When the library is small, it’s relatively easy to scan the book spines for the Dewey Decimal numbers. However, as the library collection (that is, book piles) gets larger, finding a single book becomes more time consuming and resource intensive.

Data storage under the global deduplication framework is like the library example above. Unique files or data blocks are indexed as they come into the data store and are not grouped by user. When the data store is small, it’s relatively easy for the system to locate all of the data blocks mapped to one user when a restore is necessary. As the data store grows in size, the process of locating all of those data blocks takes longer. This slows down the restore process and forces the end user to wait at the most critical point in the process—when he or she needs to get files back in order to continue working. With local deduplication, the system goes straight to that user’s own data store and performs a restore from the most recent clean backup. As a result, local deduplication restores are five to nine times faster than restores with global deduplication.
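
Translated into code, the restore-path difference looks roughly like this. The dictionaries are simplified stand-ins (a real deployment involves a distributed index, network fetches and vastly larger stores), but they show which store a restore has to consult in each model.

```python
def restore_file(block_digests, block_store):
    """Reassemble one file from the list of block digests in its manifest."""
    return b"".join(block_store[d] for d in block_digests)

# Local dedup: the store consulted at restore time holds only this user's blocks.
alice_store = {"d1": b"report v3 ", "d2": b"and appendix"}
alice_manifest = ["d1", "d2"]

# Global dedup: the same restore runs against the single store shared by every user.
# The lookups look identical in this toy example, but the shared store and the index
# in front of it grow with the whole company's data, which is what stretches out the
# search for one user's blocks in practice.
company_store = {"d1": b"report v3 ", "d2": b"and appendix", "d3": b"...", "d4": b"..."}

print(restore_file(alice_manifest, alice_store))    # reads from a small, per-user archive
print(restore_file(alice_manifest, company_store))  # reads from the organization-wide store
```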

The real security story: What you’re not being told about global deduplication doesn’t stop there. Two-factor encryption doesn’t mean what you think it does. Frankly, an encryption key coupled with an administrator password is NOT two-factor encryption. It’s not even two-factor authentication. It’s simply a password layered over a regular encryption key. Should someone with the encryption key compromise the password, he or she will have immediate access to all of your data.
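
To see the distinction in the abstract, here is a generic sketch (not any vendor’s scheme) contrasting a password that merely gates access to a stored key with a key that is cryptographically derived from both secrets.

```python
import hashlib, hmac, os

def gate_with_password(data_key: bytes, password: str, stored_hash: bytes) -> bytes:
    """Password as a gate only: anyone who already holds data_key can skip this check."""
    if hashlib.sha256(password.encode()).digest() != stored_hash:
        raise PermissionError("wrong password")
    return data_key

def derive_effective_key(data_key: bytes, password: str, salt: bytes) -> bytes:
    """Two secrets combined: the key that actually protects the data requires BOTH factors."""
    pw_key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.new(pw_key, data_key, hashlib.sha256).digest()

# Illustrative usage with throwaway values
salt = os.urandom(16)
effective_key = derive_effective_key(os.urandom(32), "correct horse battery staple", salt)
```

In the first function the password is only an application-level check; in the second, neither the stored key nor the password is useful on its own.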

Conclusion

Companies that deploy endpoint backup clearly care about the security of their data. They count on endpoint backup to reliably restore their data after a loss or breach. Given the vulnerabilities exposed by the global deduplication model, it is counterintuitive to sacrifice security and reliability in a backup model in favor of “benefits” that profit the seller or cannot be experienced by the buyer.

Why Higher Education is Now the Top Ransomware Target

As enterprise ransomware continues to accelerate—now striking a business every 40 seconds—it’s also found a new favorite target: educational institutions. A new report from BitSight shows education is now the most targeted industry for ransomware, and headlines back up the stats, with recent attacks on colleges, universities and entire public school districts. One reason hackers are putting schools in their crosshairs: decentralized IT across departments increases the odds of a successful attack.

Decentralized IT—common in higher ed—creates security holes

While collaboration and knowledge-sharing may be top priorities on campus, most departments still operate relatively autonomously—especially when it comes to technology. Operationally, it makes a lot of sense for individual departments to build and support their own IT infrastructure. The astrophysics department has much different technology requirements than the literature department, for example. From a security perspective, however, this lack of standardization and central control increases the likelihood of holes or vulnerabilities. Across a dozen (or dozens of) departments, there’s a good chance at least one has some combination of outdated devices and unpatched OS, inadequate email filtering and AV, faulty data backup or insufficient user training and policy.

Cybercriminals bet on higher-ed IT holes

For cybercriminals playing the odds with exploit kits or phishing scams, the logic is simple: a wider range of IT means a better chance of finding a hole. Consider how a ransomware attack against a corporation compares to an attack on a university:

(Comparison graphic: a ransomware attack on a corporation vs. an attack on a university)

See how higher education can prepare for ransomware

Download the Code42 slideshare, “The new threat on campus: Ransomware locks down education,” to see the other common vulnerabilities and learn how to build a ransomware defense and recovery strategy.


In Healthcare, Ransomware Actually Threatens Patient Safety

Imagine needing medical care and being turned away because the hospital or provider is paralyzed by a ransomware attack. Perhaps even scarier: needing emergency care and being treated “blind” by doctors who can’t access your medical records. This isn’t some far-off worst-case situation. Just last March, MedStar Health, the largest healthcare provider in the D.C. region, was forced to turn patients away and treat others “blind” for two full days after ransomware locked down its patient database.

Legislators urge HHS to focus on continuous data access

Nightmare scenarios like this are getting the attention of regulators and legislators. In June, two U.S. congressmen released a letter urging HHS to amend HIPAA rules to prioritize continuity of data access. In particular, they called for a focus on any incident that “results in either a denial of access to an electronic medical record and/or loss of functionality necessary to provide medical services.” The loss of data access is more concerning than a privacy breach, explained Congressman Ted Lieu, because “it could result in medical complications and deaths if hospitals can’t access patient information.”

It makes sense, doesn’t it? Patients (and the general public) have a right to know about incidents like this. After all, you might not choose the hospital that can’t promise continuous care.

Is healthcare too focused on data privacy?

HHS did recently issue specific guidance on ransomware and HIPAA compliance. But the guidance stays within the realm of the original HIPAA rules, focusing entirely on data privacy concerns. The result, according to a new report titled “Hacking Hospitals,” is that the typical healthcare organization has built its security infrastructure and strategy with tunnel vision on patient data privacy and HIPAA compliance. The report cautions that a singular focus on data privacy leaves an organization unprepared and vulnerable to a range of other cyberattacks that may pose an equal or greater risk. In the case of ransomware, the risk arguably supersedes patient privacy concerns, impeding the organization’s ability to actually deliver patient care. “These findings illustrate our greatest fear,” the report warns, “patient health remains extremely vulnerable.” The report concludes that the focus on data privacy, “while important, should come second to protecting patient health.”

Importance of data access elevates disaster planning and recovery

The shift toward focusing on continuous data access isn’t unique to healthcare. Regulators in every industry are realizing that an interruption to data access—such as a ransomware attack—may have a graver impact than a traditional data breach. Businesses themselves are also seeing the threat of huge monetary losses from an interruption in service delivery. Looking back to healthcare, the ransomware attack on Hollywood Presbyterian Medical Center made headlines for the $17,000 ransom payment, but the cost of system downtime was far higher, with an estimated $1 million in lost revenue from lost CT scans alone.

This realization is putting disaster planning and recovery on the same level as detection and prevention in a modern data security strategy—and putting data backup squarely in the spotlight. The legislators pushing for HIPAA changes already acknowledge that effective backup can eliminate data access interruptions and mitigate the risk to patient health. Future regulations in healthcare and other industries will likely include specifications for comprehensive data backup—covering central servers and systems, as well as the half of all enterprise data that now lives on users’ endpoint devices.

Considering the high risk and cost, we don’t advise waiting around until regulators force the issue.

Evolving threats compel an about-face in data protection strategy

It’s time to flip our thinking about enterprise information security. For a long time, the starting point of our tech stacks has been the network. We employ a whole series of solutions on servers and networks—from monitoring and alerts to policies and procedures—to prevent a network breach. We then install some antivirus and malware detection tools on laptops and devices to catch anything that might infect the network through endpoints.

But this approach isn’t working. The bad guys are still getting in. We like to think we can just keep building a bigger wall, but motivated cybercriminals and insiders keep figuring out ways to jump over it or tunnel underneath it. How? By targeting users, not the network. Today, one-third of data compromises are caused by insiders, either maliciously or unwittingly.

Just because we have antivirus software or malware detection on our users’ devices doesn’t mean we’re protected. Those tools are only effective about 60% to 70% of the time at best. And with the increasing prevalence of BYOD, we can’t control everything on an employee’s device.

Even when we do control enterprise-issued devices, our security tools can’t prevent a laptop from being stolen. Or keep an employee from downloading client data onto a USB drive. Or stop a high-level employee from emailing sensitive data to a spear phisher posing as a co-worker.

We need to change our thinking. We need to admit that breaches are inevitable and be prepared to quickly recover and remediate. That means starting at the outside, with our increasingly vulnerable endpoints.

With a good endpoint backup system in place, one that’s backing up data in real time, you gain a window into all your data. You can see exactly where an attack started and what path it took. You can see what an employee who just gave his two weeks’ notice is doing with data. You can see if a stolen laptop has any sensitive data on it, so you know if it’s reportable or not.

By starting with endpoints, you eliminate blind spots. And isn’t that the ultimate goal of enterprise infosec?

To learn more about the starting point in the modern security stack, watch the on-demand webinar.
