Why is everyone so clueless about enterprise data storage?

Alex Wallner, senior VP and GM EMEA at NetApp, said recently that most of the company's customers have no idea what their storage requirements are going to be in three years' time.

As Niels Bohr once commented, prediction is very difficult, especially about the future. Even so, this seems an astounding state of affairs. Three-year predictions of business requirements are hardly unheard-of. Sure, they all come with caveats, but "no idea" – really?

In reality, it's harder than one might suppose to predict storage requirements even a year ahead. Let's start with the basics. Assume your organization will remain at its current size in headcount and revenue terms, or even grow slightly, in line with previous years (or shrink, but in that scenario you'd have more pressing issues than storage space). Unfortunately, size has little to no bearing on the company's future storage requirements. Business processes are changing fast, and as they become more digital they generate more data, both internally and from customer interactions.

The phrase “big data” sounds quaintly old-fashioned now that we're in an era in which everything data-related is big by definition. Data storage requirements haven't followed a linear progression over the past three years – chances are your own firm's storage usage graph already looks exponential – so there's little reason to believe that they will in the next three. But just how much bigger will those requirements become? Hard to say, but there are plenty of reasons to think that the answer will be “A lot”.
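
How much difference do growth assumptions make? Here's a minimal back-of-the-envelope sketch in Python; the 100TB starting point and the growth rates are illustrative assumptions, not figures from NetApp or anyone else:

    # A minimal sketch: compound-growth projection of storage needs.
    # All parameters are illustrative; substitute your own historical figures.
    def project_storage_tb(current_tb, annual_growth, years):
        """Return current_tb grown at annual_growth (e.g. 0.4 = 40%) for `years`."""
        return current_tb * (1 + annual_growth) ** years

    for rate in (0.2, 0.4, 0.6):  # hypothetical growth scenarios
        print(f"{rate:.0%} annual growth: "
              f"{project_storage_tb(100, rate, 3):,.0f} TB after 3 years")

A 100TB estate lands anywhere between roughly 173TB and 410TB after three years, depending on which rate you pick – which is precisely why confident three-year figures are so rare.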

Almost every business event generates data. Automation, for example, whether it replaces human employees or accompanies and augments them, generates swathes of it. Processes must be analyzed, performance tracked, every fine detail measured in order to ensure that automation does actually improve efficiency. All of this data must be stored.

Then consider IoT sensor data: even modest IoT networks can generate enormous volumes of data. With edge computing, much of that won't need to be stored centrally, at least in raw form. But the processed numbers will have to be kept, both for analysis and to streamline and improve the way those networks operate.
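
For a rough sense of scale, here's a back-of-the-envelope estimate; every parameter below is an assumption chosen to illustrate the gap between raw and edge-processed volumes, not a measurement from any real deployment:

    # Illustrative IoT data volume estimate; all parameters are assumptions.
    SENSORS = 1_000                # a "modest" network
    READINGS_PER_SECOND = 10       # per sensor
    BYTES_PER_READING = 100        # timestamp + sensor ID + payload

    raw_per_day = SENSORS * READINGS_PER_SECOND * BYTES_PER_READING * 86_400
    print(f"Raw: {raw_per_day / 1e9:.1f} GB/day "
          f"({raw_per_day * 365 / 1e12:.1f} TB/year)")

    # Suppose edge devices forward only one-minute aggregates for central storage.
    agg_per_day = SENSORS * (86_400 // 60) * BYTES_PER_READING
    print(f"Aggregated: {agg_per_day / 1e6:.0f} MB/day")

Even with a 600-fold reduction at the edge, the aggregates from this one modest network still add up to more than 50GB a year of data that must live somewhere.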

In fact there are legal arguments to be made that all raw data should be kept. However impractical that may sound, courts tend not to care too much about inconvenience to defendants. Imagine the first major court case against a driverless car manufacturer in which the defendant was unable to produce the sensor data leading up to an accident for independent experts to analyze. “We discarded it” would not look good for the defense. The same applies in less dramatic, more mundane circumstances. A robot arm that accidentally injured an assembly-line worker would likely require a full data audit trail. No data? That could be costly.

Then there's the vast array of analytics tools increasingly being pressed into service to tell you more than ever about your past, present and future customers. Marketing feedback, PoS data, online traffic and footprints, social media engagement statistics, CRM data and so much more. Some of this data may be handled off-site by service suppliers. Still more of it might be shunted to various cloud platforms. Yet out of sight isn't out of mind. It still has a cost and must be budgeted and accounted for.
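
Making those scattered costs visible is often the first step. A simple tally along these lines can help; the line items and prices below are hypothetical placeholders for your own figures:

    # Hypothetical tally of storage spend across locations ($/month).
    monthly_storage_costs = {
        "on-prem SAN": 2_400,
        "analytics warehouse": 1_750,
        "cloud object store": 900,
        "CRM/SaaS overage fees": 350,
    }

    total = sum(monthly_storage_costs.values())
    for item, cost in sorted(monthly_storage_costs.items(), key=lambda kv: -kv[1]):
        print(f"{item:24s} ${cost:>6,} ({cost / total:.0%})")
    print(f"{'total':24s} ${total:>6,}")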

To make matters even more interesting, storage costs are not immutable. Although Moore's Law still (sort of) applies to compute power, storage isn't getting bigger and cheaper at the same rate. There is a downward price-per-GB trend, but blips upward can spoil the graph. For example, the Meltdown/Spectre mitigation patches mainly affect I/O-heavy operations, slowing them down considerably. If, as seems likely, cloud providers increase their prices to compensate, you'll be paying more for the same amount of I/O, or paying the same amount but getting lower performance.
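
To put a number on that, here's an illustrative calculation; the instance price, throughput and slowdown factor are all assumptions rather than measured benchmarks:

    # Illustrative only: how a mitigation slowdown changes effective I/O cost.
    price_per_hour = 1.00     # hypothetical cloud instance price, $/hour
    iops_before = 50_000      # hypothetical I/O throughput before patching
    slowdown = 0.20           # assume patches cost 20% of throughput

    iops_after = iops_before * (1 - slowdown)
    cost_before = price_per_hour / (iops_before * 3_600)  # $ per operation
    cost_after = price_per_hour / (iops_after * 3_600)
    print(f"Cost per million I/O ops: ${cost_before * 1e6:.4f} "
          f"-> ${cost_after * 1e6:.4f} (+{cost_after / cost_before - 1:.0%})")

Note that a 20% throughput loss already means a 25% rise in effective cost per operation, before any explicit price increase from the provider.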

If you only store data, that's not likely to be such a major issue. But chances are that since you collected the data in the first place, you're going to want to do something with it. To paraphrase Confucius, data that is not used might as well not exist. So you will require compute and I/O, which is where CPU flaw mitigation patches hit hardest.

All of this means that if you're in operations and scratching your head while trying to come up with storage projection figures to present to the board, you're not alone. Just include a link to this article in your presentation. And if you're a C-level exec on the receiving end of such a presentation, spare a thought for your poor operations team – this is one of those rare business scenarios in which “I have no idea” might be the only correct answer.

Alex Cruickshank

Alex Cruickshank has been writing about technology and business since 1994. He has lived in various far-flung places around the world and is now based in Berlin.  
