Inside The AI Data Hall

Why tomorrow’s data halls must be built for density, liquid cooling, and speed.

By Steve Altizer, President & CEO, Compu Dynamics

Over the last two years, AI has quietly rewritten the rules inside the data hall. The relatively predictable, steady-state cloud and application workloads of the past are giving way to something far more volatile and demanding.

The AI boom is a new architectural era taking shape from the inside out. Rack densities are jumping by an order of magnitude. Cooling is shifting from air to liquid. Power is becoming both a constraint and a differentiator, and the traditional “big empty box” approach is rapidly running out of runway.

None of this is theoretical. Compu Dynamics teams are in data halls every day, installing first-of-its-kind equipment and solving problems that did not exist five years ago. The lesson is simple: In the AI era, nothing about data center infrastructure remains generic. Operators who build for yesterday’s workloads and “adapt” for AI later will pay for that decision for decades.

The better path starts with recognizing that inside the building, everything has changed.

From Generic Boxes to Purpose-Built “Data Fabs”

When large-scale cloud first took off, the prevailing design philosophy was straightforward: Build a big, flexible shell with as much white space as possible. Roughly 80 percent of the floor area was IT space; 20 percent was for mechanical and electrical support systems. That model worked when rack densities hovered around 5kW.

AI and high-performance computing are flipping that ratio. In many AI-ready environments, white space is becoming the minority. The supporting plant (power distribution, UPS, switchgear, busways, cooling, and containment systems) is the majority of the footprint.

Consider the shift: a 5MW data hall designed around 5kW racks occupies 25,000 to 50,000 square feet of white space. That same 5MW, optimized for 150kW to 200kW AI racks, requires as little as 4,000 to 6,000 square feet. In modular configurations we are designing today, a 5MW data hall is less than 1,500 square feet. The rest of the site starts to look less like a traditional occupied building and more like an industrial plant: pipes and conduits everywhere, structured utility spines, chillers, and power modules lined up with purpose.
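
The arithmetic behind that shift is easy to sketch. The minimal back-of-the-envelope example below assumes illustrative per-rack floor allowances; they are not figures from a specific design, only values chosen to reproduce the ranges above:

```python
# Back-of-the-envelope white space sizing for a 5MW data hall.
# Assumption (illustrative): each rack's footprint allowance includes its
# share of aisles, containment, and service clearances.

def white_space_sqft(hall_mw: float, rack_kw: float, sqft_per_rack: float) -> float:
    """Approximate white space needed when the hall's power budget is filled with racks."""
    racks = (hall_mw * 1000) / rack_kw      # racks the power budget supports
    return racks * sqft_per_rack            # total footprint, aisles included

# Legacy cloud hall: 5kW racks at roughly 25-50 sq ft per rack.
legacy = (white_space_sqft(5, 5, 25), white_space_sqft(5, 5, 50))

# AI-optimized hall: 150-200kW racks at roughly 150-180 sq ft per rack
# (assumed to cover manifolds, CDUs, and service clearances).
ai = (white_space_sqft(5, 200, 150), white_space_sqft(5, 150, 180))

print(f"Legacy 5kW racks:   {legacy[0]:,.0f} - {legacy[1]:,.0f} sq ft")  # ~25,000 - 50,000
print(f"AI 150-200kW racks: {ai[0]:,.0f} - {ai[1]:,.0f} sq ft")          # ~3,750 - 6,000
```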

Meanwhile, much of the colocation and cloud world is still building large, highly adaptable shells, believing that “flexibility” will preserve value. They design for a broad spectrum of tenants and try to adapt the same building to everything. That approach feels safe, but is inherently non-optimal for pure-play AI deployments. AI tenants in those environments will pay more than they should in rent. A common response is “the shell itself is cheap.” That may be true relative to mechanical and electrical infrastructure, but it’s not free. Offering a space that is 10X to 20X too large will not be competitive once capacity constraints ease and customers have true AI-optimized options.

By contrast, true AI-optimized facilities, whether built from the ground up or deployed as modules, treat the data hall as a fabrication plant for tokens and insights. Every square foot, watt, and gallon of coolant is engineered to maximize useful compute. That is the design philosophy now guiding our teams’ AI-ready projects, where very little is stick-built. Every element is modular, repeatable, and optimized for density. The “building” is no longer the star; the power and cooling distribution systems are.

White Space as an Engineered Performance Layer

Historically, white space planning revolved around airflow and capacity: hot aisle, cold aisle, containment strategies, and a largely homogeneous load profile. In the AI era, white space must be treated as an engineered, integrated performance layer where electrical, cooling, network, and compute interact in real time.

Designing for this reality requires a deliberate approach to how every element in the white space fits, functions, and is maintained over the life of the facility.

An effective engineered performance layer features:

  • Adaptive electrical distribution to serve both conventional and high-density zones without rebuilding each time new AI hardware lands.
  • Zoned thermal strategies that allow different cooling regimes (air, hybrid, and liquid) to coexist, as sketched after this list.
  • Structural integration that recognizes the white space is now “heavy.” Racks, manifolds, busways, and containment systems are significant structural components that demand industrial discipline.
  • Off-site fabrication to improve quality, compress schedules, and reduce on-site labor in a market where skilled trades are stretched thin.
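
To make the zoning idea concrete, here is a minimal sketch that models white space as zones, each pairing an electrical budget with a cooling regime, and checks whether a planned rack deployment fits. All zone names, capacities, and densities are hypothetical, chosen only to illustrate the interaction:

```python
# Illustrative model of a zoned white space: each zone pairs a power budget
# with a cooling regime, and a planned deployment is checked against both.
# All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    cooling: str            # "air", "hybrid", or "liquid"
    power_budget_kw: float  # electrical capacity allocated to this zone
    max_rack_kw: float      # densest rack this zone's cooling regime supports

    def fits(self, rack_kw: float, rack_count: int) -> bool:
        """True if the planned racks fit this zone's density and power limits."""
        return rack_kw <= self.max_rack_kw and rack_kw * rack_count <= self.power_budget_kw

zones = [
    Zone("conventional", "air",    power_budget_kw=1000, max_rack_kw=15),
    Zone("hybrid-row",   "hybrid", power_budget_kw=1500, max_rack_kw=60),
    Zone("ai-block",     "liquid", power_budget_kw=2500, max_rack_kw=200),
]

# Planned deployment: 12 liquid-cooled AI racks at 150kW each.
rack_kw, rack_count = 150, 12
for z in zones:
    print(f"{z.name:12s} ({z.cooling:6s}): fits={z.fits(rack_kw, rack_count)}")
# Only the liquid zone accepts this load (12 x 150kW = 1,800kW <= 2,500kW).
```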

Viewed this way, the AI data hall has more in common with a petrochemical plant than with early cloud facilities. That is not a metaphor. It is where the design is heading.

Liquid Cooling: From Talking Point to Design Constraint

No topic attracts more attention today than liquid cooling. Everyone is talking about it, some are piloting it, but few have integrated it at scale reliably and repeatably.

Effectively integrating liquid cooling presents a real challenge for the industry. Three distinct paths are emerging:

  • In-rack solutions, with cooling distribution units integrated into the cabinet, where the operator simply connects power and facility water. These deliver impressive densities with straightforward facility interfaces.
  • Room- or row-based distribution systems with stainless steel piping, stringent fluid quality requirements, and complex commissioning processes. These can serve large, mixed environments well, or become a reliability nightmare if poorly executed.
  • Liquid-to-air heat exchangers allow operators to deploy liquid-cooled IT into legacy air-cooled buildings by extracting heat locally and rejecting it via the existing plant. These are now being deployed at scale, including first-of-their-kind systems our teams are installing in hyperscale environments.

Each approach carries different implications for risk, cost, and operations. One of the biggest mistakes is assuming any mechanical contractor can design and commission high-density liquid systems. The tolerances are tight, commissioning is exacting, and failure modes are unforgiving.
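
One way to see why the tolerances are tight is to run the basic heat-removal arithmetic. The sketch below assumes a water-based coolant and illustrative temperature rises; it shows how quickly flow requirements, and with them pipe sizing, pump power, and commissioning tolerances, grow for a single 150kW rack:

```python
# Coolant flow needed to absorb a rack's heat: Q = m_dot * c_p * deltaT.
# Assumes a water-based coolant (~4186 J/(kg*K), ~1 kg per liter);
# rack power and temperature rises are illustrative.

C_P = 4186.0        # specific heat of water, J/(kg*K)
L_PER_GAL = 3.785   # liters per US gallon

def flow_lpm(heat_kw: float, delta_t_k: float) -> float:
    """Coolant flow in liters per minute for a given heat load and temperature rise."""
    kg_per_s = (heat_kw * 1000) / (C_P * delta_t_k)
    return kg_per_s * 60    # ~1 liter per kg for water

for dt in (7, 10, 15):
    lpm = flow_lpm(150, dt)
    print(f"150kW rack, deltaT = {dt:2d}K: {lpm:5.0f} L/min (~{lpm / L_PER_GAL:3.0f} gpm)")
# Narrower allowable temperature rises drive flow up fast, which is why
# fluid quality, balancing, and commissioning leave so little margin.
```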

In the AI era, liquid cooling is a first-order design constraint to be addressed from day one, not retrofitted at the 11th hour.

Modular Infrastructure: Strategic, Not Stopgap

Modular design has been part of the data center conversation for years, but it has largely been relegated to edge deployments or treated as a tactic to add capacity at the margins of a campus. AI is changing that.

When new compute generations arrive every six to 12 months, traditional stick-built construction struggles to keep pace. By the time a conventional building is designed, permitted, and constructed, the hardware it was designed around may be two generations behind.

Modular infrastructure changes the equation in several ways:

  • Speed and repeatability through pre-engineered IT, power, and cooling modules fabricated in parallel with site work, then shipped, set, and interconnected in a fraction of the time required for conventional builds.
  • Density optimization using modular systems purpose-designed for high-density AI workloads from day one, avoiding the compromises inherent in “one size fits all” shells.
  • Reusability and redeployment of modules that can be upgraded, reconfigured, or repositioned across campuses or portfolios as GPU generations evolve without sunk cost in oversized or underutilized buildings.

Modular enables a “Bring Your Own White Space” (BYOWS) model, separating data center infrastructure into long-term utility systems (power, cooling, connectivity) and short-term white space assets that can be customized, replaced, or optimized for each workload generation.

The impact can be dramatic. While traditional hyperscale shells cost $13 million to $15 million per MW, well-executed modular campuses can land far lower, around $7 million to $8 million per MW, because every element is optimized for density, not aesthetic uniformity.
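
At campus scale, that spread compounds. As a rough illustration, the sketch below applies both per-MW ranges cited above to a hypothetical 50MW campus; the campus size is an assumption made purely for illustration:

```python
# Rough build-cost comparison for a hypothetical 50MW campus, using the
# per-MW ranges cited in the text. The 50MW figure is an assumption.

CAMPUS_MW = 50
TRADITIONAL = (13e6, 15e6)  # $/MW, traditional hyperscale shell
MODULAR     = (7e6, 8e6)    # $/MW, well-executed modular campus

def campus_cost(per_mw_range, mw=CAMPUS_MW):
    """Return (low, high) total build cost in dollars."""
    return tuple(rate * mw for rate in per_mw_range)

trad_lo, trad_hi = campus_cost(TRADITIONAL)
mod_lo, mod_hi = campus_cost(MODULAR)

print(f"Traditional: ${trad_lo/1e6:,.0f}M - ${trad_hi/1e6:,.0f}M")  # $650M - $750M
print(f"Modular:     ${mod_lo/1e6:,.0f}M - ${mod_hi/1e6:,.0f}M")    # $350M - $400M
print(f"Difference:  ${(trad_lo - mod_hi)/1e6:,.0f}M - ${(trad_hi - mod_lo)/1e6:,.0f}M")
```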

Of course, modular is not the answer for every scenario. There will always be a strong demand for large-scale, multi-tenant facilities. But as AI workloads grow and diversify, blended footprints will emerge where modular AI clusters coexist with more conventional white space, either inside larger buildings or arrayed along exterior utility spines.

In that world, modular is a strategic architecture that allows operators to keep pace with the compute curve, instead of chasing it.

The Next Three Years: A Mixed, Modular, Liquid Future

Projecting three years out in an environment changing this fast is risky, but certain trajectories are clear enough to act on now. Operators should plan for a world in which:

  • AI rack densities of 200kW become common, with pockets of 600kW and above emerging at the bleeding edge.
  • Liquid cooling represents a significant share of new high-density deployments, with some AI halls designed from the outset as 100 percent liquid-cooled environments.
  • Modular solutions account for a material share of new AI capacity, on the order of one-fifth, in markets where speed, power constraints, or land costs demand highly optimized footprints.
  • Existing data centers find new life as AI inference nodes, regional training sites for smaller models, or hybrid environments combining legacy and AI workloads. The value in these facilities will be unlocked by creatively integrating modular and liquid solutions around and within them.

Crucially, footprints will become more mixed. It will not be unusual to see a single campus combining stick-built, multi-tenant halls; highly specialized, purpose-built AI blocks; and arrays of modular IT, power, and cooling systems—all sharing common spines, utilities, and operational playbooks.

Designing for the Pace of Compute

AI is forcing us to rethink data center design assumptions. Generic templates and “good enough” flexibility will not survive the next wave. Operators who thrive will accept that infrastructure must adapt to the pace of compute—not the other way around.

That means embracing higher densities and more complex thermal regimes. It means treating white space as an engineered performance layer. It means using modularity strategically, not tactically. And it means recognizing that power and cooling are front-line enablers of business value.

At Compu Dynamics, we have been at the point of impact for decades, designing, building, and servicing facilities from 20kW labs to 100+MW campuses, across government, cloud, and hyperscale. Today, that experience has our teams deploying first-of-their-kind hyperscale liquid cooling systems, designing gigawatt modular AI campuses, and re-imagining white space as a tightly integrated industrial environment. In an era where many “don’t know what they don’t know” about AI infrastructure, operators need partners who understand, and are actively building for, the trends coming two and three years from now.

ABOUT THE AUTHOR

Steve Altizer has nearly four decades of experience building some of the world’s most sophisticated government and commercial facilities. In 2002, Altizer founded the Andrew Browning Group (now Compu Dynamics). Prior to that, he served as a senior executive with several nationally ranked general and mechanical contractors.

Throughout his career, he has been a student of and thought leader in the technology and science behind today’s modern building environments. This interest has naturally led to an affinity for clients whose requirements drive them toward facilities that are smart, clean, safe, reliable, and secure. His focus for the last 25 years has been exclusively on data centers ranging in size from 20kW to over 100MW. Altizer earned a BS in Mechanical Engineering and an MBA, both from the University of Virginia.