How Embedded and Forward-Deployed Workloads Pose Special Performance Issues

Our last blog explored how defense and intelligence tactical compute problems are really “petabyte-scale” applications. Today’s data center compute hardware can clearly handle data sets of this magnitude. The problem for the warfighter is that a conventional data center typically cannot be deployed into a tactical environment – space, power, and cooling constraints alone rule out a “Google-style” data center in a forward combat theatre. Add the requirement of survivability in combat conditions (hardening against electromagnetic pulse, ballistic damage, and explosive shock), and the scope of the problem becomes significant. And as we pointed out in our last blog, “calling home” for data processing isn’t an option either; forward-deployed networks simply don’t have the bandwidth to move petabytes of data.

To put an “exclamation point” on the problem, consider the deployment of data processing assets on a modern U.S. Navy aircraft carrier. The Nimitz class aircraft carriers are the largest warships ever built (and the largest forward-deployed “presence” in the US military), with a displacement of roughly 100,000 tons and a length of over 1,000 feet. These ships are powered by two A4W nuclear reactors, each rated at roughly 550 megawatts – but most of that output goes to generating steam to propel the ship, leaving comparatively little electrical power for sensors and data processing. This is why the newest class of US aircraft carrier (the Gerald R. Ford class) was designed with significantly increased electrical power generation for the latest-technology sensor and data processing equipment. Even with this additional generating capacity (expected to be roughly 3 times that of the Nimitz class), there are still severe space and environmental constraints every system must meet (watch this video to see the shock profile that systems have to pass before they can be placed onboard ships).

So even in the largest, most “accommodating” environments, reducing the size, weight, and power consumption of forward-deployed IT systems is critical. A typical 1U2P (one rack unit, two-processor) “pizza box” data center server consumes roughly 450 watts; a rack of 32 such servers draws over 14 kilowatts. A small data center with a couple dozen rows of servers (each row holding 20 or more racks) can easily consume several megawatts. Worse yet, recent studies have estimated that more than 50% of that power is spent simply moving data around inside the data center. This is a problem computational storage can help fix – while also shrinking the overall compute footprint (size and weight), which is critical for forward-deployed workloads. We will explore how computational storage can help with this in our next blog.
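To make the arithmetic above concrete, here is a minimal back-of-envelope sketch in Python. The server wattage, rack and row counts, and the data-movement fraction are the illustrative figures quoted in this post (and an assumed facility size of two dozen rows), not measured values for any particular deployment.

```python
# Back-of-envelope power budget for a small data center, using the
# figures quoted above. All numbers are illustrative assumptions.

WATTS_PER_SERVER = 450          # typical 1U2P "pizza box" server
SERVERS_PER_RACK = 32
RACKS_PER_ROW = 20              # "20 or more racks" per row
ROWS = 24                       # "a couple dozen rows" (assumed)
DATA_MOVEMENT_FRACTION = 0.5    # ">50% of power spent moving data" (cited estimate)

rack_kw = WATTS_PER_SERVER * SERVERS_PER_RACK / 1_000
facility_mw = rack_kw * RACKS_PER_ROW * ROWS / 1_000
data_movement_mw = facility_mw * DATA_MOVEMENT_FRACTION

print(f"Per rack:          {rack_kw:.1f} kW")        # ~14.4 kW
print(f"Whole facility:    {facility_mw:.1f} MW")    # ~6.9 MW
print(f"Spent moving data: {data_movement_mw:.1f} MW")
```

Even under these rough assumptions, several megawatts – half of it spent just shuttling data between storage and compute – is simply not available in a forward-deployed environment, which is the gap computational storage aims to close.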
