How health data strategy should drive storage decisions

How much flash storage is enough? The answer, says one expert, depends on what you’re planning to do with the data.

Jeff Rowe | Oct 18, 2017 12:00 am

The concept of moving inactive data from an expensive tier of storage to a less expensive tier has been around for decades.

But as George Crump, president of Storage Switzerland, an IT analyst firm, wrote recently at TechTarget, the problem has been that “most data movement options were terrible, and IT professionals came to the conclusion that managing their data wasn't worth the effort. But things changed in recent years, and moving data between tiers of storage is now easier to implement and manage.”

A big part of that change, naturally, has been the introduction of flash storage, and Crump spends the bulk of his article distinguishing between the hype and reality of flash storage versus other storage options, as well as guiding stakeholders through the array of considerations they need to weigh as they determine how best to manage their data.

As he explains the evolving challenge, “the first step [in] making the movement of data easier was the introduction of hybrid storage systems. These storage systems move data within themselves, typically from a small flash tier to a large hard disk tier.”

The challenge for hybrid systems, he says, is that they need to be capable of delivering both all-flash performance and hard disk performance. “Since the internals -- compute, memory, networking -- of a hybrid system must sustain the performance capabilities of flash, the price of those internals will be higher than if it were a standard array that only needed to sustain the performance capabilities of hard disk drives.”

And then there’s the challenge of inactive data. “A storage system solely focused on that data type enjoys [a] significant cost-per-gigabyte advantage over hybrid arrays,” Crump says, “but it introduces a separate system. Data centers need to move data between these different types of storage systems on a policy-driven basis.”
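To make the idea of policy-driven data movement concrete, here is a minimal sketch of an age-based demotion policy in Python. The tier paths, the 90-day threshold, and the use of local directories to stand in for flash and capacity tiers are all illustrative assumptions, not anything from Crump's article; a real deployment would target NAS, object storage, or a cloud bucket and would use the storage vendor's own tiering tools.

```python
import os
import shutil
import time

# Hypothetical mount points standing in for the two tiers (assumptions).
FAST_TIER = "/mnt/flash"
CAPACITY_TIER = "/mnt/capacity"
INACTIVE_AFTER_DAYS = 90  # example policy threshold, not from the article

def find_inactive(tier_path, max_age_days):
    """Return files under tier_path not accessed within max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for root, _dirs, files in os.walk(tier_path):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_atime < cutoff:
                stale.append(path)
    return stale

def demote(paths, src_tier, dst_tier):
    """Move inactive files to the capacity tier, preserving layout."""
    for path in paths:
        rel = os.path.relpath(path, src_tier)
        dest = os.path.join(dst_tier, rel)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.move(path, dest)

# A scheduler (cron, etc.) would run something like:
#   demote(find_inactive(FAST_TIER, INACTIVE_AFTER_DAYS),
#          FAST_TIER, CAPACITY_TIER)
```

The point of the sketch is the shape of the policy, not the mechanics: a rule that identifies inactive data, and a mover that relocates it without user involvement.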

In the end, Crump says, “most data centers should only consider all-flash arrays for active and near-active data, at most 20 percent of total capacity. The remaining 80 percent of data center storage should reside on capacity storage -- high-capacity NAS, object storage or public cloud storage.”
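Crump's 20/80 rule of thumb translates directly into capacity planning arithmetic. A small sketch, assuming the split is applied to raw usable capacity (the function name and parameters are illustrative):

```python
def plan_tiers(total_tb, flash_fraction=0.20):
    """Split total capacity per the rule of thumb quoted above:
    at most ~20% on all-flash for active and near-active data,
    the remainder on capacity storage (NAS, object, or cloud)."""
    flash_tb = total_tb * flash_fraction
    return {"flash_tb": flash_tb, "capacity_tb": total_tb - flash_tb}

print(plan_tiers(500))  # 500 TB total -> 100 TB flash, 400 TB capacity
```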