
Data storage has evolved rapidly in the last few years, and the whole concept is being reimagined.

Everything from the storage medium – just days ago, for instance, Microsoft announced its hope to use DNA-based storage in its data centers by 2020 – to the technology used to manage storage systems is being improved upon or replaced with new ideas every year.

One of the flag bearers of the advancing storage industry is the cloud: once an insecure and unreliable platform feared by many in the technical realm, it now dominates the storage market and is used extensively even by the most cautious businesses.

Case in point: Enterprise Resource Planning (ERP) is seen as one of the last categories of software that organizations have not yet migrated to the cloud, primarily because of its need to integrate with every aspect of a company’s financial and business processes, along with its business-critical nature. SAP, as announced on May 22, has decided to focus on cloud-based ERP in the coming years – a clear indicator of just how far cloud services have come in recent times.

Perhaps the biggest change in data storage in recent years is the blurring of the line between data storage and data processing platforms.

With software-defined storage, the computation and storage of data are being converged into hybrid platforms that use innovative technologies to create ever-improving storage solutions. Some of these innovations include Datrium’s server-side flash system, Drivescale’s rack-scale infrastructure pooling, and Igneous’ integration of an ARM processor on each disk drive, among many others.

Of course, data storage is also getting faster and more efficient, with new kinds of automatically tiered and cached storage products forming converged platforms built to analyze and stream data at massive scale, at speeds unfathomable a decade or two ago.
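The idea behind automatic tiering can be illustrated with a toy sketch: frequently read data is promoted from a slow "cold" tier to a fast "hot" tier, so callers never manage placement themselves. The class name, tier layout, and promotion threshold below are illustrative assumptions, not any vendor's actual design.

```python
# Toy sketch of automatic storage tiering (illustrative assumptions only):
# keys start in a slow "cold" tier and are promoted to a fast "hot" tier
# once they are read often enough.
from collections import Counter

class TieredStore:
    def __init__(self, promote_after=3):
        self.hot = {}            # fast tier (think flash or RAM)
        self.cold = {}           # slow tier (think disk or object storage)
        self.reads = Counter()   # per-key access counts
        self.promote_after = promote_after

    def put(self, key, value):
        # New data lands in the cold tier by default.
        self.cold[key] = value

    def get(self, key):
        if key in self.hot:
            return self.hot[key]
        value = self.cold[key]
        self.reads[key] += 1
        # Promote once a key proves it is frequently accessed.
        if self.reads[key] >= self.promote_after:
            self.hot[key] = self.cold.pop(key)
        return value

store = TieredStore(promote_after=2)
store.put("report", b"quarterly data")
store.get("report")           # first read: served from the cold tier
store.get("report")           # second read: triggers promotion
print("report" in store.hot)  # -> True
```

Real systems layer eviction, write-back, and capacity accounting on top of this, but the core loop – observe access patterns, move data between media automatically – is the same.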

Even the way storage operating systems are written has changed dramatically: most are now containerized applications, which better handle massive scale-out and high availability and make it easier to converge computation within the storage layer.

Most of these innovations are making low-level, manual tasks obsolete, freeing up time and resources for businesses to focus on higher-value work. Less time will be spent figuring out how data is stored, and more time can be spent actually using that data.

It’s almost impossible to predict exactly where data storage will be even a decade from now – will we be using minuscule amounts of DNA to store terabytes of data, or perhaps storing the history of the human race on a piece of glass? Will virtually everything move into the cloud, or will a new system emerge?

Only time will tell, but if there’s one thing to be learned from the evolution of data storage, it’s that something interesting is always around the corner.