The New Software-Defined Data Center

By Dean Nelson, Technology Executive, Board Member, Advisor, Investor, Advocate, and Philanthropist | deannelson.com

Published December 31, 2019

I built my first data center 18 years ago. The challenges we faced then are not much different from those we face today: density, redundancy, mixed environments, containment, ride-through, commissioning, scale, future-proofing, and optimizing designs for efficiency, to name a few. Over the last decade, hardware, network, and software management norms have been disrupted, driving significant technical advancements. Virtualization, containers, application orchestration, and software-defined everything have transformed thinking, driven more investment, and fueled global capacity growth to serve the insatiable data appetite of consumers.

Server hugging and the fear of hardware failures are now the minority mindset. That change took almost a decade to settle in. In contrast, facility designs and deployments have stayed relatively the same. Most data centers still deploy 1.5x to 3x the required power and cooling capacity to cover potential outages, regardless of the software gains outlined earlier. They design for the lowest common denominator. Many would argue that modularity and PUE optimization have driven exceptional efficiency gains in powering IT equipment. I agree. Teams have done an amazing job of removing data center power and cooling waste compared to the previous decade. The right side of the decimal in PUE has been reduced significantly, curbing the rapid growth of data center power consumption predicted in Jonathan Koomey’s 2007 EPA energy report. The exemplar in this effort has been Google, which has achieved a trailing 12-month average PUE of 1.12 across all of its major global data centers since 2013. With a global portfolio exceeding 3,500 MW, that is a truly impressive accomplishment.
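
To make the PUE arithmetic concrete, here is a minimal sketch in Python. The 10 MW IT load is an assumed figure chosen purely for illustration; the 1.12 value is the fleet-wide average cited above.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

# Illustrative 10 MW IT load; this figure is an assumption, not from the article.
it_load_kw = 10_000.0

# A legacy facility at PUE 2.0 draws twice the IT load from the grid.
legacy_total_kw = it_load_kw * 2.0       # 20,000 kW

# At the 1.12 fleet average cited above, overhead is only 12% of the IT load.
efficient_total_kw = it_load_kw * 1.12   # 11,200 kW

print(f"Legacy PUE:    {pue(legacy_total_kw, it_load_kw):.2f}")
print(f"Efficient PUE: {pue(efficient_total_kw, it_load_kw):.2f}")
print(f"Power saved:   {legacy_total_kw - efficient_total_kw:,.0f} kW")
```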

Yet the majority still overbuild data centers to plan for failure. Power and cooling redundancy far exceeds IT consumption, in some cases by more than 4x. An example: a customer contracts 1 MW of critical load and, even when the facility is full, consumes less than 50% of that capacity on average. The data center provider builds at least 2 MW of capacity to meet the critical-load SLA. That means 75% of the built capacity is never used. Sound familiar? This was the same pattern for servers, storage, and network hardware before virtualization. It may seem obvious, but why are we using data centers the same way we did in 2000?
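
A minimal sketch of that arithmetic, using the figures from the example above (1 MW contracted, under 50% average draw, at least 2 MW built):

```python
def stranded_fraction(contracted_mw: float, avg_utilization: float,
                      built_mw: float) -> float:
    """Fraction of built power capacity that is never actually consumed."""
    consumed_mw = contracted_mw * avg_utilization
    return 1.0 - consumed_mw / built_mw

# The example from the text: a 1 MW contract drawing under 50% on average,
# backed by at least 2 MW of built capacity to meet the SLA.
unused = stranded_fraction(contracted_mw=1.0, avg_utilization=0.5, built_mw=2.0)
print(f"Built capacity never used: {unused:.0%}")  # -> 75%
```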

Bottom line: data centers need to give customers options that break with the last two decades of thinking. Software-Defined is the future.

In all of these scenarios, there was a forcing function that required teams to innovate. The most obvious has been cost reduction. Colocation, enterprise, hyperscale, and cloud data centers alike have had to get creative about reducing the cost per kW. And they have. The market price has plummeted while capacity has grown faster than at any other time in my 30-year career. The challenge? PUE reduction is reaching diminishing returns, and we are nearing the cost floor of the current data center design paradigm. While some colocation providers have gotten creative by overselling capacity based on these trends, they are introducing risk if usage increases. I have seen constant manual power rebalancing used to manage this approach. In other cases, the provider simply eats the excess capacity over time as a cost of doing business. Either way, the cost floor is right around the corner.
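
As a rough illustration of why overselling introduces risk, the sketch below models committed load against built capacity. The 10 MW build, 1.5x oversell ratio, and utilization figures are assumptions for illustration, not figures from the article.

```python
# Hypothetical oversubscription scenario; all figures are illustrative assumptions.
built_capacity_mw = 10.0   # physical power and cooling actually built
contracted_mw = 15.0       # critical load sold against it (1.5x oversold)

def expected_draw(contracted_mw: float, avg_utilization: float) -> float:
    """Expected facility draw given the average utilization of contracted load."""
    return contracted_mw * avg_utilization

# As average utilization creeps up, the oversold facility runs out of headroom.
for utilization in (0.40, 0.55, 0.70):
    draw = expected_draw(contracted_mw, utilization)
    status = "OK" if draw <= built_capacity_mw else "OVER CAPACITY"
    print(f"avg utilization {utilization:.0%}: draw {draw:.1f} MW -> {status}")
```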