Mike Yousef, Senior Vice President of Sales (August 3, 2018) – It sounds like the hi-tech version of a schoolyard taunt, doesn't it? As a kid, I took my fair share of lumps, so far be it from me to ever play the bully. At the same time, there are some legitimate questions to be answered here. Am I ugly? Well, I try hard not to be, but that is for someone else to decide. Is your storage dumb? Now that is a question worth pondering…
In the age of multi-function, why are you only good at one?
I have been in the storage industry for a lot longer than I will ever admit. In that time, I've seen the HDD companies, and now the SSD companies, all try to make the best storage devices on the planet. They focus relentlessly on data transfer speeds, BER (bit error rate), MTBF (mean time between failures), data integrity, and any number of other parameters, all tied to one simple premise: that their devices will keep your data efficiently stored in perpetuity. In fact, one well-known storage company went so far as to coin the term "Data Forever Architecture". We all know and agree that data is being created at an incredible rate. We would also agree that we will ultimately need someplace to store that data, so their premise, though misguided, is not without merit.
But why do we need a grand mausoleum to store our digital remains forever?
Wouldn't it be far better not to bury our data on a "dumb" drive, but rather to keep our data alive through continual application of analytics, artificial intelligence, and any number of yet-to-be-dreamed-of applications?
Of course it would, but you might argue that we simply don't have the CPU or memory bandwidth to allow for that. In fact, it has been said that in the massive datacenters of the cloud, there is a media-to-compute mismatch of 60x or even greater.
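To make that 60x figure concrete, here is a back-of-envelope sketch. Every number below is an assumption chosen for illustration (drive counts and bandwidths vary widely in practice), not a measurement from this post:

```python
# Illustrative media-to-compute mismatch calculation.
# All figures are assumptions for the sake of the sketch.

drives_per_deployment = 1000     # hypothetical large cloud deployment
per_drive_bw_gbs = 3.0           # assumed NVMe SSD read bandwidth, GB/s
host_ingest_bw_gbs = 50.0        # assumed bandwidth one host can consume, GB/s

# The media can collectively deliver far more data per second
# than the compute side can absorb.
aggregate_media_bw = drives_per_deployment * per_drive_bw_gbs
mismatch = aggregate_media_bw / host_ingest_bw_gbs

print(f"Aggregate media bandwidth: {aggregate_media_bw:.0f} GB/s")
print(f"Media-to-compute mismatch: {mismatch:.0f}x")
```

Under these assumed numbers the drives can source 3,000 GB/s while the host side absorbs only 50 GB/s, a 60x gap, which is why simply buying faster hosts doesn't close it.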
Figure 1: More Data than Bandwidth in Traditional Architectures
Which leads me to again ask the obvious question, why does the final resting place for our most valued digital assets have to be a uni-tasker?
The Revolution is Coming…
If there is a problem to be solved, it's inevitable that a bunch of bored engineers will sketch out a solution on the back of a soggy cocktail napkin. Fortunately for me, those engineers also happened to be personal friends. On August 2nd, NGD Systems announced the world's first fully integrated Computational Storage SSDs.
These devices bring your data back to life by adding intelligence (think CPU) to those very same block devices that have done nothing more than just sit there for the last 60 years. Finally, storage can do more than just store your data!
With Computational Storage SSDs and their patented "In-Situ" processing capabilities built in, it becomes possible not only to solve the bandwidth challenge but, more importantly, to analyze the data as close as possible to where it physically resides. This concept of "CPU augmentation" lets us use dedicated on-board 64-bit processors to run real-time analytics, encryption, neural networks, or most any other application (within reason) on the drive itself, thereby bypassing the need for added host bandwidth at either the CPU or the memory bus.
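The core of the idea can be sketched in a few lines: run the query where the data lives and ship back only the results. The functions below are purely hypothetical models (they are not NGD's API); they exist only to count the bytes that would cross the host bus in each approach:

```python
# A minimal sketch of in-situ processing, assuming a simple filter query.
# These functions model bytes crossing the host bus; they are
# illustrative, not a real computational-storage API.

def host_side_scan(records, predicate):
    """Conventional path: every record crosses the bus to the host."""
    bytes_moved = sum(len(r) for r in records)
    hits = [r for r in records if predicate(r)]
    return hits, bytes_moved

def in_situ_scan(records, predicate):
    """Computational-storage path: the drive's processor runs the
    predicate, and only matching records cross the bus."""
    hits = [r for r in records if predicate(r)]   # runs on the drive
    bytes_moved = sum(len(r) for r in hits)       # only hits move
    return hits, bytes_moved

# 10,000 records of 4 KiB each, with a single record of interest.
records = [b"x" * 4096 for _ in range(10_000)]
records[42] = b"needle" + b"x" * 4090

pred = lambda r: r.startswith(b"needle")
hits_host, moved_host = host_side_scan(records, pred)
hits_drive, moved_drive = in_situ_scan(records, pred)

assert hits_host == hits_drive   # same answer either way
print(f"Host-side scan moved {moved_host / 2**20:.1f} MiB across the bus")
print(f"In-situ scan moved   {moved_drive / 2**10:.1f} KiB across the bus")
```

In this toy case the host-side scan moves roughly 39 MiB while the in-situ scan moves 4 KiB, because the result set, not the raw data, is what travels to the host.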
Analytics Are Massively Parallel…
At the 32nd IEEE International Parallel and Distributed Processing Symposium back in May, we shared a keynote with Microsoft Research. Two key facts about Computational Storage and “In-Situ” processing were shared with the audience. These are:
1) Large datasets require significantly less memory to process when In-Situ processing is used.
Figure 2: Memory Utilization of large Image Libraries running TensorFlow with/without Computational Storage
2) "In-Situ" processing is massively parallel and scales compute power linearly.
Figure 3: “In-Situ” processing is massively parallel and scales linearly
As impressive as these results are, they do not tell the whole story.
Multi-tasking Built In…
While adding compute capability to our SSDs has tremendous performance benefits at the application level, we should not forget that we still have an obligation to provide our customers with an outstanding block storage device. With the Newport class of products, we bring to market the world's only 14nm SSD controller design. This allows us to offer higher-capacity, lower-power products than traditional "dumb" storage devices. Coming in at 8-12W and up to 64TB (depending upon the form factor), you can not only scale compute power but also deploy the devices en masse without overloading an already stressed power budget.
Come Learn More at FMS…
There is a lot that gets left out of a post like this but I hope you see that you can do more than just bury your data on “dumb” storage. If you really want to see Computational Storage in action, please stop by and visit us in Booth 618 at the Flash Memory Summit being held August 7-9 at the Santa Clara Hyatt/Convention Center. We’ll show you neural networks, TensorFlow, and a host of other applications, all running natively on “Intelligent” Storage devices.
Computational Storage is a new product category, and inevitably many will use the term to spin their marketing and try to convince you of something that it is not. NGD Systems is an engineering-driven company, and we would welcome an opportunity to show you the future of storage and the revolution that is at hand. It's Computational, you know…