As I have worked in my post-Microsoft career advising a host of different companies across the supply chain, I have gained a new perspective on how strong the momentum of our industry's past success really is. It's more entrenched than I thought. Let's face it: We don't reward change, especially when the common mantra in the industry is "If there is an outage, you never get fired for doing what we did before." In other words, why change and do something new when, if there is an outage and it's perceived that something was done differently, you lose your job?
Right now the industry is facing the biggest expansion in its relatively young history, and the last thing we want to do is change anything during this accelerated pace. The risk is too big (so they say). However, the incentives are wrong: We are optimizing for the job security of the individual as opposed to the long-term success and viability of the business. History has shown us how companies that don't adapt to market dynamics become obsolete. Who are the Kodaks, Xeroxes, and Blockbusters of our industry that will fail to change because they are stuck in what made them successful in the first place? And who will be the disruptors of the digital infrastructure industry?

Grid Backup
Data centers have always had diesel generators to back up the grid when there is an outage—or, in other words, they have always had 2N redundancy for electrons. The implication of this approach is that whenever a data center is built and interconnected to the grid, it relies on idle generation being turned up or additional generation being added. As a result, every data center has dedicated generation equal to twice its interconnection, and someone is paying for that 2N generation. If the grid went down, the UPS was triggered, and the diesels fired up and generated electricity (electrons). This model (also known as a 5-9s design for a good customer experience, since the grid itself was only considered 3-9s) has been the norm for a few decades now to ensure 99.999 percent uptime.
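The 5-9s claim above is simple probability: the site loses power only when the grid and the backup chain fail at the same time. A minimal sketch of that arithmetic, using the article's 3-9s grid figure and an assumed, purely illustrative availability for the UPS-plus-diesel chain:

```python
# Illustrative availability math behind "3-9s grid + backup = 5-9s".
# The 99.9% grid figure comes from the article; the backup-chain
# availability is a hypothetical round number, not measured data.

grid_availability = 0.999    # "3-9s" grid, per the article
backup_availability = 0.99   # assumed UPS + diesel chain (illustrative)

# The site is down only if the grid AND the backup chain fail together
# (treating the two failure modes as independent).
site_availability = 1 - (1 - grid_availability) * (1 - backup_availability)

print(f"{site_availability:.5%}")  # 99.99900% -> "5-9s"
```

The independence assumption is generous (a storm can stress both), but it shows why even a modest backup chain lifts a 3-9s grid to five nines.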
Due to environmental permitting and sustainability pressures, many different backup systems have been emerging over the past decade. All sorts of technologies—like BESS (battery energy storage systems), fuel cells, and mechanical potential energy storage techniques—have been trying to get a grip on the incumbent diesel generators. A few years ago, Microsoft announced the first 3MW hydrogen fuel cell generator, but ultimately abandoned the project because of the lack of green H2 supply and its cost. BESS has also been interesting, but it's a challenge to provide more than four hours of backup due to cost, particularly for lithium-ion. While there are many battery technologies, one of my favorite emerging technologies for data centers is the aluminum-air battery developed by Phinergy. Historically this type of battery has been used in cell towers and is considered a one-time-use battery, but it shows promise as a diesel replacement: it's lower cost, it's dense, and it can offer days of energy storage with aluminum as the fuel (oxidizing aluminum plates produces electricity). From my perspective, it looks more like a generator that runs on aluminum. I look forward to seeing the evolution of this technology.

Onsite Generation
An area that is ripe for change is how we power data centers today and what people feel comfortable with. I previously discussed this topic in Issue 17 of InterGlobix Magazine through my article on “Power Caching,” which explored why it's important to bring your own power for gigawatt-scale deployments. For me, there still seems to be a cultural gridlock on this notion since it's not what was done before. (Note: I did not say “islanding the power.” I believe it always makes sense to be connected to the grid in time for other monetization opportunities.) Since this whole story has been chronicled previously, I will leave it to the interested reader to go back to the power caching article. The good news is that, due to the timing of interconnection queues, onsite generation is beginning to be explored as a viable bridging option.
During the past decade, there were some one-of-a-kind exceptions, such as Microsoft's 2016 Cheyenne deal, where my power team advanced an interesting hypothesis: we could take non-firm power (of which the utility had plenty, so it would not need to add generation) and instead provide natural gas generators (not diesel) behind the meter. These generators were dispatchable by the utility for up to two weeks per year. The beauty here was that the utility did not have to make any capital investments for additional generation, and Microsoft got much lower utility rates. Unfortunately, this type of arrangement has never been repeated by Microsoft or anyone else. However, I am encouraged by the efforts of organizations like EPRI to push for flexible loads.

2N versus N+1
Onsite generation opens up significant new opportunities to change how we architect resilience and deploy infrastructure, though the momentum of legacy practices still follows us. For example, my involvement with Mainspring, a linear generator company, has given me insight into how customers are deploying resilient generation. While Mainspring's multi-fuel (natural gas, hydrogen, propane, ammonia) technology and low emissions open the door to rearchitecting power resilience, I was blown away when they told me some of their customers were installing diesel generators as backup to their onsite baseload. This was quite surprising to me, especially because of the additional cost of 2N generation (natural gas and diesel). My question right away was: Why not use an N+1 architecture, especially since the natural gas network has much higher availability than the electric grid, and the cost of generation would be almost half? The answer was that diesel provided a diverse fuel source. This made sense at first, until I started thinking:
- Most data centers’ MEP systems are N+1 to allow for failures and concurrent maintenance. It would only follow that if N+1 is good enough for the MEP systems, the generation could follow the same philosophy.
- A characteristic of Mainspring linear generators is that they can not only operate on any fuel, but also transition between fuels seamlessly during operation. This feature enables fuel redundancy: in the unlikely event of a natural gas pipeline interruption, the generators could flip over to another fuel stored onsite (such as propane), and storage tanks are much cheaper than a second set of diesel generators. This approach effectively changes the game from a 2N electricity source (electrons) to a dual molecule source.
While this particular application may be unique to Mainspring linear generators, it does illustrate that we can start looking at infrastructure architecture and resilience differently, thereby offering potentially lower-cost solutions, faster deployment times, and perhaps even better reliability, if we can see past our own biases.
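The N+1-versus-2N question above can be made concrete with a little binomial arithmetic. The numbers below are hypothetical (four generators to carry the load, 98 percent availability per unit), chosen only to illustrate the shape of the trade-off, not drawn from any vendor's data:

```python
from math import comb

def k_of_n_availability(k: int, n: int, a: float) -> float:
    """Probability that at least k of n independent units are available."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

N = 4     # generators needed to carry the full load (hypothetical)
a = 0.98  # per-generator availability (hypothetical)

# N+1: one spare unit, five generators total; any four carry the load.
n_plus_1 = k_of_n_availability(N, N + 1, a)

# 2N: two independent full strings of N generators; the load is lost
# only if both strings have lost at least one unit.
one_string = a**N
two_n = 1 - (1 - one_string)**2

print(f"N+1 ({N + 1} units): {n_plus_1:.4%}")
print(f"2N  ({2 * N} units): {two_n:.4%}")
```

With these assumptions the N+1 plant needs five units instead of eight and, because the single spare covers any one failure, it actually comes out slightly more available than two independent strings of four, which is the intuition behind the cost question posed above.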
Thinking Bigger
The purpose of this article is simply to start the dialogue on how best to deploy onsite generation, especially if the interconnection crisis continues. I believe there are emerging technologies that can unlock power, lower costs, and shorten deployment time, but we have to look past what we are used to and think bigger!
ABOUT THE AUTHOR
Christian Belady is highly experienced in managing data center and infrastructure development on a global scale. Currently, he is an advisor and board member of several companies in the infrastructure space.
Previously, he served as Vice President and Distinguished Engineer of Data Center R&D for Microsoft’s Cloud Infrastructure Organization, where he developed one of the largest data center footprints in the world. Before that, he was responsible for driving the strategy and delivery of server and facility development for Microsoft’s data center portfolio worldwide. With over 160 patents, Belady is a driving force behind innovative thinking and quantitative benchmarking in the field. He is an originator of the Power Usage Effectiveness (PUE) metric, was a key player in the development of the iMasons’ Climate Accord (ICA), and has worked closely with government agencies to define efficiency metrics for data centers and servers. Over the years, he has received many awards, most recently the NVTC Data Center Icon Award, and was elected to the National Academy of Engineering.