Why traditional backup solutions fall short


Data was once something you more or less managed on your own. The reason was simple: most storage media, from PC hard drives to optical discs and USB thumb drives, were local, and cloud computing as we now know it did not exist. Without the scale, flexibility and cost-effectiveness of on-demand pooled IT resources (i.e., public and hybrid clouds), critical applications such as backup were as reliant on on-premises or nearby infrastructure as word processors once were on floppy disks.

Why traditional backup is still around – and why that's a risk for many businesses

However, unlike the storage formats of past decades, legacy backup systems have never really gone the way of the 3.5-inch floppy. The traditional methods of relying on external disks, magnetic tapes and mirrored offsite servers still appeal to many organizations because they offer a high degree of control and familiarity and require less transfer of data over networks.

At the same time, there are significant risks in sticking to the old standby of self-managed backup, running the gamut from the financial to the procedural:

1. High costs, especially for mirrored server setups

Doing backup the old-fashioned way means spending a considerable amount upfront on your equipment as well as the space to house it all. Do you have a budget that can:

  • Support regular major purchases of infrastructure?
  • Ensure acceptable connectivity between your sites?
  • Provide sufficient cooling for hardware?
  • Maintain well-paid technical staff?

While some firms can foot the bill for these amenities, the typical IT budget is not growing quickly enough to keep pace with rapidly evolving requirements for data management. The 2018 State of IT survey from Spiceworks revealed only 44 percent of respondents expected a year-over-year budget increase in 2018, while 54 percent foresaw either static or decreased annual allocations (the rest weren't sure). The outlook for IT staffing was similar.

Moreover, the money IT departments are spending is increasingly channeled toward cloud and managed hosting services instead of the hardware and software that are the bread and butter of standard backup systems. Spiceworks found the cloud/hosted category accounted for more than one-fifth (21 percent) of the typical budget, not far behind software (26 percent).

A larger share of respondents also planned to increase their spending on cloud services than in any other category. This data point prompts the question: Why not shift backup to the cloud, alongside the many user-facing applications, development platforms and analytics tools that have already moved there? The flexible subscription models of Infrastructure as a Service (IaaS) are also available for Backup as a Service (BaaS), and you don't have to manage your own data center, since the cloud service provider oversees all the important facilities behind the scenes.

2. Complicated and error-prone workflows

Automation is a core principle of cloud computing services, setting them apart from most on-prem equivalents. The National Institute of Standards and Technology once defined cloud as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This is impossible without some degree of automation.

"Automation is a core principle of cloud computing services, setting them apart from most on-prem equivalents."

With sufficient automation, end users spend far less time on the repetitive tasks that prop up overly complex operations and invite preventable errors. Consider the case of a non-cloud offsite mirroring setup. Administrators would need to duplicate a great deal of work between locations, including purchasing, upgrading and patching, and keeping track of all the change management requirements adds further complication. Even a small slipup by an overworked technical team could create a major security vulnerability or degrade the performance of a critical connection for restoring a system backup.
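To make that duplication concrete, here is a minimal sketch in Python of the kind of job a self-managed offsite mirroring setup lives or dies by: archive the data, copy it to each site, verify each copy. All paths are hypothetical, and a production deployment would rely on proven tooling rather than a hand-rolled script; the point is simply how many steps must run flawlessly, every time, at every location – exactly the repetitive work a BaaS provider automates behind the scenes.

```python
# A minimal sketch (hypothetical paths) of a self-managed mirrored backup job.
import hashlib
import shutil
import tarfile
from datetime import datetime, timezone
from pathlib import Path

SOURCE = Path("/srv/app-data")                             # data to protect (assumed)
MIRRORS = [Path("/mnt/mirror-a"), Path("/mnt/mirror-b")]   # offsite mounts (assumed)

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def run_backup() -> None:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = Path(f"/tmp/backup-{stamp}.tar.gz")

    # Step 1: create one compressed archive of the source directory.
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)
    expected = sha256(archive)

    # Step 2: copy to every mirror and verify each copy, so a silent
    # transfer error cannot masquerade as a good backup.
    for mirror in MIRRORS:
        target = mirror / archive.name
        shutil.copy2(archive, target)
        if sha256(target) != expected:
            raise RuntimeError(f"Checksum mismatch on {target}")

if __name__ == "__main__":
    run_backup()  # schedule via cron or systemd so no step is ever skipped
```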

Organizations might not think such a relatively simple oversight could affect them, but it happens to virtually everyone sooner or later. For example, Apple recently disclosed and rapidly closed a critical flaw in macOS High Sierra, pledging a thorough audit of its development processes due to the severity of the mistake. However, the hastily issued patch itself broke file sharing and required manual entry of a command in the Terminal app. For companies far smaller than Apple, managing their backups in-house with limited personnel, similar mishaps could mean unacceptable downtime and lost customers.

3. Inability to support data analytics or overall compliance

The emergence of data analytics – i.e., the manipulation of large datasets, in many cases by algorithms and automated utilities, for insight – has pushed legacy storage and data management practices to their limits. With so much information being generated every day – IBM estimated most data in history was created this decade – backup systems must scale in tandem with the wide array of applications and services organizations are deploying to extract value from their data.

Many big data projects begin as linear extensions of existing on-prem infrastructures, such as data warehouses. IT buys the assets it thinks it will need based on the current scope and recalibrates its strategy as requirements evolve. Unfortunately, this approach can quickly become impractical due to the time and effort needed to add more resources.

"The emergence of data analytics has pushed legacy storage and data management practices to their limits."

In other words, a project can begin at terabyte scale and suddenly morph into petabyte scale, with a far greater need for scalable storage. If the backup system's initial technical specifications become inadequate along the way, more resources must be procured just to keep pace. Investing in cloud-based BaaS from the get-go is usually a sound choice because it spares you from sinking an enormous amount of money into traditional infrastructure and data centers that might become outdated overnight once you require larger capacities. With BaaS, procuring more resources is simple and cost-effective: you are typically billed based on consumption, and you retain the flexibility to adjust your services as needed.
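To see what consumption-based billing means in practice, consider a back-of-the-envelope comparison. Every figure below is invented purely for illustration and reflects no actual provider's rates:

```python
# A toy comparison of pre-purchased capacity versus consumption-based BaaS
# billing. All prices and usage numbers are hypothetical.
UPFRONT_CAPACITY_TB = 500        # hardware sized for projected peak (assumed)
UPFRONT_COST_PER_TB = 150.0      # purchase, housing and cooling per TB (assumed)
BAAS_COST_PER_TB_MONTH = 8.0     # pay-as-you-go storage rate (assumed)

def upfront_cost() -> float:
    # Paid in full on day one, whether or not the capacity is ever used.
    return UPFRONT_CAPACITY_TB * UPFRONT_COST_PER_TB

def baas_cost(monthly_usage_tb: list[float]) -> float:
    # Billed only for what is actually stored each month.
    return sum(tb * BAAS_COST_PER_TB_MONTH for tb in monthly_usage_tb)

# A project that starts small and grows more than twentyfold in a year:
usage = [20, 25, 40, 60, 90, 120, 160, 200, 250, 310, 380, 450]
print(f"Upfront hardware: ${upfront_cost():,.0f}")    # $75,000 committed on day one
print(f"BaaS, year one:   ${baas_cost(usage):,.0f}")  # $16,840, scaling with use
```

And if growth had instead blown past the 500 TB ceiling, the upfront buyer would face another procurement cycle, while the consumption model simply keeps scaling.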

4. Less capable technology and poorer overall performance

In light of the challenges we've outlined in maintaining a typical backup system, it is no surprise that many IT departments end up saddled with technology that is behind the curve. On top of that, they have to invest significant time performing security assessments and replacing broken or outdated equipment. Accordingly, performance will suffer if and when an actual event necessitates restoring from backup.

The cloud model of BaaS removes these complications en route to delivering more up-to-date services. With the cloud service provider offering 24/7 monitoring, ultra-secure data center facilities and constant updates for security and functionality, you don't have to worry about falling behind. This peace of mind also frees up more time to focus on activities that can add value to your business, instead of the less appealing work of tinkering with IT infrastructures.

Get started with BaaS from UbiStor

UbiStor is a proven provider of cloud services, including BaaS and data management. We help ensure your information is protected regardless of the environment in which it lives. Learn more by visiting our main BaaS page today, and contact us directly with any questions about the solution.