How Computational Storage Can Eliminate Latency in Content Delivery Networks


Our last blog explored the challenges that content delivery networks (CDNs) encounter with content encryption, and how those challenges impact the performance and capacity of CDN content servers. In this blog, we will examine how computational storage and in-situ processing can eliminate those latency challenges and provide CDNs with processing that scales linearly with storage capacity.

In computational storage systems, compute resources are embedded directly in storage systems or devices, allowing data to be acted upon while it is still in the device. This is the concept of “in-situ” (in-place) processing. The primary benefit of in-situ processing is that it eliminates the need to move data from storage to a server’s main memory and CPU complex. For large data sets in the petabyte range, in-situ processing can eliminate the latency associated with this data movement, as well as the power and cooling it consumes (a recent data center study showed that up to 50% of data center power and cooling is consumed moving data between storage systems/devices, servers, and access points).
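
To make the contrast concrete, here is a small, purely illustrative Python sketch (not any real storage or vendor API) that models how many bytes must cross the storage-to-host interface in a conventional host-side flow versus an in-situ flow. The object size, command size, and result size are all assumed figures.

```python
# Toy model of data movement only; none of this reflects a real device API.
# Assumption: host-side processing reads the full object into host memory
# and writes the processed copy back, while in-situ processing sends a small
# command and receives a small result, leaving the bulk data on the device.

OBJECT_SIZE = 8 * 1024**3   # ~8 GB, e.g. a 2-hour HD movie (assumed)
COMMAND_SIZE = 64           # bytes sent to the device (assumed)
RESULT_SIZE = 4 * 1024      # bytes returned to the host (assumed)

def host_side_bytes_moved(object_size: int) -> int:
    """Object crosses the interface twice: read out, then written back."""
    return 2 * object_size

def in_situ_bytes_moved(command_size: int, result_size: int) -> int:
    """Only the command and a small result record cross the interface."""
    return command_size + result_size

if __name__ == "__main__":
    host = host_side_bytes_moved(OBJECT_SIZE)
    in_situ = in_situ_bytes_moved(COMMAND_SIZE, RESULT_SIZE)
    print(f"Host-side processing: {host / 1024**3:.1f} GB across the interface")
    print(f"In-situ processing:   {in_situ / 1024:.1f} KB across the interface")
```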

Computational storage turns this dynamic on its head by eliminating the need to move data from storage devices to the server and back. That round trip adds a total of four seconds of latency for the typical 2-hour HD movie (approximately 8 GB of data). Computational storage eliminates it by performing the encryption of protected data, key management, and other tasks inside the computational storage SSD itself. For instance, the NGD Systems Newport NVMe Computational Storage SSD, the most advanced U.2 SSD available today, combines an advanced multi-core ARM processor, accelerators for encryption and artificial intelligence, and 16 TB of data storage in a 15 mm-thick U.2 package. And in case you thought the power consumed by encryption was simply moved from the server processor to the SSD, the Newport NVMe U.2 SSD draws less than 10 watts.
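
As a back-of-envelope check on that four-second figure (our own illustrative arithmetic, not a published methodology): if the 8 GB object crosses the interface twice and the effective SSD-to-host throughput is an assumed 4 GB/s, the data movement alone takes about four seconds.

```python
# Back-of-envelope latency estimate. The 4 GB/s effective throughput is an
# assumed figure chosen for illustration, not a measured or vendor value.

MOVIE_SIZE_GB = 8.0               # ~2-hour HD movie
EFFECTIVE_THROUGHPUT_GBPS = 4.0   # assumed effective SSD <-> host throughput

# The data crosses the interface twice: once out for encryption, once back.
data_moved_gb = 2 * MOVIE_SIZE_GB
movement_latency_s = data_moved_gb / EFFECTIVE_THROUGHPUT_GBPS

print(f"Data moved across the interface: {data_moved_gb:.0f} GB")
print(f"Estimated movement latency:      {movement_latency_s:.1f} s")
```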

Our last blog stated that a dual-processor server loses the equivalent of up to 25K sessions per hour to data movement latency. Computational storage offers a means to recover that lost capacity and put it to work generating revenue instead of just consuming power. If you would like to find out more about our Newport real-time computational storage devices, visit our website or contact me. Better yet, come and visit us at the NAB Show at the Las Vegas Convention Center, April 8-11. We will have our solutions in three locations: the Sprockit Startup Pavilion (North Hall, Booth N3735SP-B), the AIC booth (South Hall, Booth SL4406), and the EchoStreams booth (South Hall, Booth SL12208). We look forward to seeing you. Thanks!
