Key Trends to Look Forward To In the Storage Area Network Market during the Next Decade

The total amount of data roughly doubles every year, and more than 90% of companies that suffer a major data loss are unable to stay competitive. Businesses need to secure, access and manage an ever-increasing volume of assets, both physical and knowledge-based. The data centre landscape has changed substantially over the last decade. Storage is seen less as a separate subsystem (such as a standalone SAN box) and more as software-defined storage. Companies increasingly manage networks, applications and storage together rather than in isolation, treating storage as an integral, tightly integrated part of the whole system. Performance requirements are also rising rapidly, and SSD-level performance is now the minimum acceptable level. Several technologies are anticipated to shape the Storage Area Network market in the years ahead, including Copy Data Management, Erasure Coding, Next-Generation Storage Networking, Object Storage and software-defined storage appliances.

  1. Copy Data Management

Managing multiple physical copies of the same data with different tools is an expensive, time-consuming process and can even pose a security risk for the organisation. Copy Data Management (CDM) instead uses a single live clone to back up, archive, replicate and provide additional data services. It is forecast to be one of the major trends this year and in the near future. Several start-ups have unveiled products alongside traditional vendors such as Hitachi Data Systems, Catalogic Software, NetApp and Commvault. Some companies provide software that separates data from the underlying infrastructure while consolidating siloed data-protection processes; others converge all secondary storage workloads onto a 2U appliance that serves as the building block of a scale-out architecture.

CDM differs markedly from traditional storage management in that it streamlines siloed workflows in which customers might use multiple tools to deal with data from different vendors. The goal of all these products is to strike the right balance between accessible and secure data by controlling the number of duplicate copies of confidential data that more traditional data-protection methods create.
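As a rough illustration of the idea, the sketch below models a catalogue that keeps one live "golden" copy per dataset and hands out lightweight virtual copies for backup, test or analytics use instead of full physical duplicates. The class and method names (CopyDataCatalog, request_virtual_copy and so on) are hypothetical and do not reflect any particular vendor's product.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class VirtualCopy:
    """A lightweight reference to the golden copy, not a physical duplicate."""
    dataset: str
    purpose: str      # e.g. "backup", "test", "analytics"
    snapshot_id: str  # identifies the point-in-time view being referenced


@dataclass
class CopyDataCatalog:
    """Hypothetical CDM catalogue: one live golden copy per dataset,
    many virtual copies that merely point at it."""
    golden_copies: dict = field(default_factory=dict)   # dataset -> bytes
    virtual_copies: list = field(default_factory=list)

    def ingest(self, dataset: str, data: bytes) -> None:
        # Store (or refresh) the single live clone for this dataset.
        self.golden_copies[dataset] = data

    def request_virtual_copy(self, dataset: str, purpose: str) -> VirtualCopy:
        # Instead of duplicating the data, record a pointer to the
        # current state of the golden copy.
        snapshot_id = hashlib.sha256(self.golden_copies[dataset]).hexdigest()[:12]
        copy = VirtualCopy(dataset, purpose, snapshot_id)
        self.virtual_copies.append(copy)
        return copy

    def physical_copy_count(self, dataset: str) -> int:
        # However many virtual copies exist, only one physical copy is held.
        return 1 if dataset in self.golden_copies else 0


catalog = CopyDataCatalog()
catalog.ingest("payroll", b"...payroll records...")
catalog.request_virtual_copy("payroll", "backup")
catalog.request_virtual_copy("payroll", "test")
print(catalog.physical_copy_count("payroll"))  # -> 1
```

The point of the sketch is simply that backup, archive and test workflows all reference the same live clone, so the number of duplicate physical copies stays at one regardless of how many consumers the data has.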

  2. Erasure Coding

Increased adoption of cloud-based backup storage, object storage and high-capacity hard disk drives (HDDs) has raised interest in erasure coding over the past few years. Petabyte- and exabyte-scale data sets make RAID unsustainable, and erasure coding appears to be one of the few technologies that can make data protection feasible for much larger amounts of data, typically on drives above 6 TB. If high-capacity drives are put in an array, recovery through RAID can take weeks; with erasure coding it can be shortened to mere hours. Erasure coding uses a mathematical function to divide data into multiple fragments and then places each fragment in a different location within the storage array. The process adds redundant fragments, and a subset of the fragments can be used to recreate the original data should any of it be lost or corrupted.

The primary objective of erasure coding is to enable drives to rebuild faster, and the process of encoding the data and scattering it across different drives is broadly similar to RAID. The key differences lie in durability and scale: if data is lost or corrupted, only a subset of the surviving fragments is required to recreate it. The method also preserves data integrity by tolerating multiple drive failures without degrading performance. Erasure coding is therefore ideal for object storage systems, i.e. scale-out, multi-node storage infrastructures.
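To make the fragment-and-rebuild idea concrete, here is a deliberately simplified Python sketch of the k + m pattern with m = 1: the data is split into k fragments plus one XOR parity fragment, so any single missing fragment can be rebuilt from the rest. Production systems use Reed-Solomon-style codes that tolerate several simultaneous losses; the fragment sizes and function names here are illustrative assumptions only.

```python
def encode(data: bytes, k: int) -> list:
    """Split data into k fragments and add one XOR parity fragment (m = 1)."""
    frag_len = -(-len(data) // k)                 # ceiling division
    padded = data.ljust(frag_len * k, b"\0")      # pad so fragments are equal-sized
    fragments = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = bytearray(frag_len)
    for frag in fragments:                        # byte-wise XOR of all data fragments
        for i, byte in enumerate(frag):
            parity[i] ^= byte
    return fragments + [bytes(parity)]


def rebuild(fragments: list, lost_index: int) -> bytes:
    """Recreate a single lost fragment by XOR-ing all the surviving ones."""
    frag_len = len(next(f for f in fragments if f is not None))
    restored = bytearray(frag_len)
    for idx, frag in enumerate(fragments):
        if idx == lost_index or frag is None:
            continue
        for i, byte in enumerate(frag):
            restored[i] ^= byte
    return bytes(restored)


# Spread 4 data fragments plus 1 parity fragment across 5 notional drives.
shards = encode(b"customer records that must survive a drive failure", k=4)
original = shards[2]
shards[2] = None                                  # simulate losing one drive
assert rebuild(shards, lost_index=2) == original
```

Because the rebuild reads only the surviving fragments rather than re-mirroring an entire high-capacity drive, recovery time scales with the amount of lost data, which is why erasure coding recovers in hours where a RAID rebuild of large drives can take weeks.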

  3. Object Storage

Unlike a file storage system, an object storage system stores data in a flat namespace with unique identifiers, so data can be retrieved without the server needing to know where that particular object physically resides. The namespace also allows far more metadata to be stored than a traditional file system permits, making data management and automation much easier for the system administrator. The technology is now being used for long-term data backup, file sharing and retention. Until very recently, these systems were severely limited because they relied on REST protocols and proprietary hardware. Now they are packaged in ways that let IT exploit them fully: they support protocols including CIFS, NFS and iSCSI, and they are proving much more cost-effective at the back end, which is spurring adoption.
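A minimal sketch of the flat-namespace idea follows: each object is stored under a globally unique identifier alongside arbitrary metadata, so a client retrieves it by ID without knowing which server or directory holds it. The class below (FlatObjectStore) and its method names are illustrative assumptions, not any vendor's API.

```python
import uuid


class FlatObjectStore:
    """Toy flat-namespace store: objects are addressed by ID, not by path."""

    def __init__(self):
        self._objects = {}          # object_id -> (data, metadata)

    def put(self, data: bytes, **metadata) -> str:
        # Assign a globally unique identifier instead of a directory path.
        object_id = str(uuid.uuid4())
        self._objects[object_id] = (data, metadata)
        return object_id

    def get(self, object_id: str) -> bytes:
        # The caller needs only the ID, not the object's physical location.
        return self._objects[object_id][0]

    def metadata(self, object_id: str) -> dict:
        # Rich, arbitrary metadata is what makes automation and
        # lifecycle policies easier than on a traditional file system.
        return self._objects[object_id][1]


store = FlatObjectStore()
oid = store.put(b"quarterly-report.pdf contents",
                owner="finance", retention_years=7,
                content_type="application/pdf")
print(store.metadata(oid)["retention_years"])   # -> 7
```

The metadata travelling with each object is what allows policies such as retention or tiering to be applied automatically, without the administrator maintaining a directory hierarchy.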

Abhishek Budholiya

Abhishek Budholiya is a tech blogger, digital marketing pro, and has contributed to numerous tech magazines. Currently, as a technology and digital branding consultant, he offers his analysis on the tech market research landscape. His forte is analysing the commercial viability of a new breakthrough, a trait you can see in his writing. When he is not ruminating about the tech world, he can be found playing table tennis or hanging out with his friends.