How big is the problem for petabyte-scale datasets?

Scott Shadley, VP of Marketing (July 16, 2018) – Today, we can store a LOT of data in a small footprint using high-performance flash SSDs. A space as small as two rack units (2U) can hold a petabyte of data. However, making use of that data is MUCH more difficult, since it must all be moved into processor RAM to be analyzed. It is like using one truck to move the contents of a very large house across the country to look for a stain on one piece of furniture. The truck would have to make many trips, and most of the time would be spent moving furniture rather than looking for the stain.
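To put rough numbers on the problem (the bandwidth and drive-count figures below are illustrative assumptions on our part, not measured results), consider how long it takes simply to stream a petabyte across PCI Express into host RAM:

# Back-of-envelope estimate; all figures here are assumptions for illustration.
DATASET_BYTES = 1e15          # 1 PB stored in the 2U enclosure
PER_DRIVE_GBPS = 3.2e9        # ~3.2 GB/s usable per NVMe SSD (assumed PCIe Gen3 x4 link)
DRIVES_IN_2U = 24             # typical 2U NVMe drive count (assumed)

seconds = DATASET_BYTES / (PER_DRIVE_GBPS * DRIVES_IN_2U)
print(f"~{seconds / 3600:.1f} hours just to move the data once")   # roughly 3.6 hours

And that is hours of bus and RAM time spent before a single byte has actually been analyzed, every time the dataset needs to be scanned.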

The concept of MapReduce, popularized by Hadoop big data solutions, addresses this at the distributed-cluster level by moving applications to where the data resides, rather than moving the data. What if we could apply the same approach at the rack level? By putting intelligence in the storage devices themselves, we could significantly reduce or even eliminate the need to move petabyte-scale datasets back and forth across PCI Express buses. Computational storage does just that – by providing compute resources inside SSDs, we can eliminate the need to move large quantities of data from storage devices to RAM and back. Keep following us to find out how NGD Systems is helping to make Computational Storage a reality. You can also read our In-Situ Processing white paper and view our Intelligent Storage video to find out more.
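To make the contrast concrete, here is a minimal Python sketch of the two data paths. The MockSSD class and its methods are hypothetical stand-ins used only for illustration; they are not NGD Systems' actual device interface.

# Illustrative sketch only: MockSSD and its methods are hypothetical.
class MockSSD:
    def __init__(self, records):
        self.records = list(records)        # stand-in for data resident on flash

    def read_all_blocks(self):
        yield self.records                  # conventional path: ship everything to host RAM

    def run_query(self, predicate):
        # in-situ path: the drive's onboard processor filters the data itself
        return [r for r in self.records if predicate(r)]

def host_side_search(ssd, predicate):
    # Every record crosses the PCIe bus before the host can even test it.
    return [r for block in ssd.read_all_blocks() for r in block if predicate(r)]

def in_situ_search(ssd, predicate):
    # Only matching records cross the bus; the bulk of the data never moves.
    return ssd.run_query(predicate)

ssd = MockSSD(range(1_000_000))
wanted = lambda r: r % 97 == 0
assert host_side_search(ssd, wanted) == in_situ_search(ssd, wanted)

Both paths return the same answer; the difference is how many bytes have to travel across the bus to get it.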
