The world of storage has seen many ups and downs: storage devices have shrunk with each generation to the point where the physical need for a dedicated device has become virtually non-existent.

We at memoryclearance have decided to take you on a refresher course on how storage has advanced, and how it will continue to advance as we move closer to a new decade.

SDS replaces traditional storage arrays 

The concept of software-defined storage (SDS) meets the requirements of many modern data centres: instead of bulky, expensive proprietary hardware-based solutions, standard servers can be leveraged with intelligent software to build storage infrastructures that scale granularly and without practical limits, and that offer extreme flexibility and efficiency. SDS initially became popular with object storage. However, object storage is struggling to make real headway in the market: its primary use case is typically active archives, and many companies find it easier to simply archive in the cloud.

At the same time, the storage industry is going through a transition as NVMe grows in popularity. NVMe-based flash, with its extreme performance benefits, lends itself perfectly to low-latency and high-performance use cases such as big data analytics, simulation and high-performance computing.

Server SAN combines the concept of SDS with the benefits of NVMe and with new technologies that enable NVMe-based flash to be shared across standard networks, and it is gaining momentum with significant year-over-year growth. Server SANs are based on standard servers and offer considerably more operational flexibility than traditional SANs, at a fraction of the cost.

Customers are switching from enterprise storage arrays to agile, software-defined architectures such as Server SAN.

NVMe-based flash and storage-class memory (SCM) innovations

NVMe-based flash has slowly but steadily been entering the market. 2017 was undeniably a breakthrough year: customers across all verticals started to deploy NVMe-based flash for a variety of use cases, from video analytics to supercomputing, from retail to simulation.

Technologies that share NVMe over networking fabrics (NVMe-oF), as well as full scale-out storage solutions that enable the use of NVMe at large scale, are facilitating this trend by enabling full utilisation of NVMe-based flash.
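To make NVMe-oF more concrete, below is a minimal sketch of attaching a remote NVMe subsystem on a Linux host by driving the standard nvme-cli tool from Python. The transport, target address, port and NQN here are placeholder assumptions, not values from any real deployment.

```python
import subprocess

# Placeholder values: in a real deployment these would come from
# your fabric configuration or a discovery service.
TRANSPORT = "tcp"                 # NVMe-oF also supports rdma and fc
TARGET_ADDR = "192.168.1.100"     # Hypothetical NVMe-oF target address
TARGET_PORT = "4420"              # Conventional NVMe-oF service port
TARGET_NQN = "nqn.2014-08.org.example:target1"  # Hypothetical NQN

def discover_targets() -> str:
    """List the NVMe subsystems the target advertises (nvme-cli 'discover')."""
    out = subprocess.run(
        ["nvme", "discover", "-t", TRANSPORT, "-a", TARGET_ADDR, "-s", TARGET_PORT],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def connect_target() -> None:
    """Attach the remote subsystem; it then appears as a local /dev/nvmeXnY."""
    subprocess.run(
        ["nvme", "connect", "-t", TRANSPORT, "-n", TARGET_NQN,
         "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )

if __name__ == "__main__":
    print(discover_targets())
    connect_target()
```

Once connected, the remote flash behaves like a local NVMe block device, which is precisely what allows Server SAN architectures to pool drives across standard networks.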

It is not surprising that NVMe-based flash is becoming the norm for enterprises that rely heavily on fast access to their storage. NVMe-based flash is simply flash storage with an optimised controller and protocol that supersedes legacy technologies designed for slower hard disk drives (HDDs), and as a result it delivers significantly improved performance and reduced latency. End users therefore require fewer drives to achieve the concurrent performance levels their workloads demand.
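To illustrate the "fewer drives" point, here is a back-of-the-envelope sizing sketch. The per-drive IOPS figures are illustrative assumptions, not benchmarks of any particular product.

```python
# Back-of-the-envelope sizing: how many drives to reach a target IOPS level?
# The per-drive numbers below are illustrative assumptions only.
TARGET_IOPS = 2_000_000

drive_iops = {
    "SATA HDD (assumed ~200 IOPS)": 200,
    "SATA SSD (assumed ~90,000 IOPS)": 90_000,
    "NVMe SSD (assumed ~700,000 IOPS)": 700_000,
}

for drive, iops in drive_iops.items():
    # Ceiling division: you cannot deploy a fraction of a drive.
    count = -(-TARGET_IOPS // iops)
    print(f"{drive}: {count} drives to reach {TARGET_IOPS:,} IOPS")
```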

Meanwhile, flash manufacturers have been working on even more significant innovations, namely SCM. SCM aims to fill the gap between extremely fast, low-capacity and expensive memory (e.g. DRAM, or Dynamic Random Access Memory) and storage such as flash, solid state drives and hard disk drives, which are lower in cost and performance, and higher in capacity. Technologies such as 3D XPoint are leading the way and raising the bar for what is considered high performance when it comes to data storage.
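To visualise the gap SCM fills, the sketch below lays out rough order-of-magnitude access latencies for each tier of the memory and storage hierarchy. The figures are assumptions for illustration only; real numbers vary considerably by product and workload.

```python
# Rough order-of-magnitude access latencies (assumed for illustration;
# actual figures vary widely by product and workload).
LATENCY_NS = {
    "DRAM": 100,                       # ~100 nanoseconds
    "SCM (e.g. 3D XPoint)": 10_000,    # ~10 microseconds
    "NVMe NAND flash SSD": 100_000,    # ~100 microseconds
    "SATA SSD": 200_000,               # ~200 microseconds
    "HDD": 10_000_000,                 # ~10 milliseconds
}

for tier, ns in LATENCY_NS.items():
    print(f"{tier:>22}: ~{ns:,} ns  ({ns / 100:.0f}x DRAM)")
```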

These are the early days of SCM, but it is already clear that customers will require software to deploy and manage it. Now more than ever there is a need for software-based storage solutions that enable customers to deploy existing and future high-performance storage, including traditional solid state drives (SSDs) and SSDs based on NVMe or SCM.

Cloud for secondary and on-premises SDS for primary storage

The public cloud will win the battle for secondary data. Over the last few years, the public cloud has been contending with on-premises object storage to be the secondary storage solution of choice. As public cloud offerings attract an increasing number of customers, object storage vendors have had to shift their messaging and sell hybrid solutions that enable customers to keep some data on-premises and move other data to the cloud.

As for primary data, there is a market for placing it in the cloud, especially among customers who use the public cloud exclusively and don't have an infrastructure of their own. But customers who have their own data centre are not expected to migrate their primary data to the cloud any time soon. There are many reasons for that: latency is one; the value of the data, and the associated security considerations, another.

In addition, a revolution is underway in block storage for primary data needs. Several vendors are pushing software-defined block storage that leverages standard servers and intelligent software.

Applying the 80-20 rule, it would be safe to estimate that 80% of secondary data will end up in public clouds, while 80% of primary data will remain on-premises, stored on more agile, efficient and flexible SDS infrastructures.