Dr Tom Harries looks at how insuring the world's fastest-growing infrastructure class – AI data centers – is testing traditional underwriting models.
AI data centers: Trillions of dollars in capital expenditure, dizzying acronyms, grid interconnection queues, fibre and networking challenges, and tech billionaires bragging about the size of their compute. Insurance is, understandably, not high on a developer's to-do list. But with $7 trillion of capital spending worldwide expected by 2030, data centers will not be built without insurance[1].
For brevity, 'data center' here means AI data center. We focus on these because the insatiable appetite for compute has led to step changes in risk and insurance requirements. We do not distinguish between training and inference data centers, but for the most part we mean the former. We use 'developer' as a catch-all for those building the data centers and for operators (also known as tenants). The article focuses on the U.S., but most of the points are universal.
A data center consists of:
- a building shell (or core) – four walls and a roof.
- balance of plant – cooling (climate control) and power systems (on-site substation, possibly on-site power plant, uninterruptible power supply (UPS), automatic transfer switches (ATS) and power distribution units (PDU)).
- IT equipment – servers, storage systems and network systems.
As with other asset classes, the insurance structure will most likely mirror the financing structure. The common financing structure has been for the shell and balance of plant to be financed in one structure and the IT equipment in another. In this scenario, it is typical for the data center developer to lease space to a tenant. The developer can raise finance off the back of a long-term lease with a highly creditworthy technology company, for example a hyperscaler such as Alphabet, Meta, Microsoft, Oracle, OpenAI or xAI. The finance pays for the shell and balance of plant while the tenant pays for the IT equipment. The pool of capital eager to provide finance is broad and deep[2].
In future, as the pipeline of self-build hyperscaler projects swells, larger financing deals that combine shell, balance of plant and IT equipment will likely emerge. These will, inevitably, require an insurance package to mirror the structure. Given the larger limits these deals will need – bigger data centers, combined financing – more insurance capacity will be required.
The complexity in insuring a data center can arise from:
- The leaps in absolute size of the latest data centers as developers strive to keep pace with the incessant demand for compute for AI training.
- Grid constraints from new, large load interconnection requests.
- Co-located power generation.
- Dynamic pricing of replacement GPUs.
- Potential interfaces between the building and balance of plant, the servers and any co-located power plant.
- Long construction timelines.
- Technology innovation.
Geographical clusters of data centers have emerged in the U.S. These include "data center alley" in Virginia and clusters in Ohio and Texas. The parameters for siting a data center differ from those for a power plant, although the two might overlap: the ERCOT power market, for example, is attractive to both. A data center would ideally secure a grid load interconnection in a cheap (and ideally clean) power market, close to fibre networks, a skilled workforce and available land. For insurers, one important distinguishing parameter for data centers is natural catastrophe exposure.
Insurers must recognise the value of international classification systems for availability and performance, for example from Uptime Institute, and not rely solely on modelled natural catastrophe exposure results to inform rating and participation[3]. With four tiers, starting at I and extending to IV ('fault tolerant'), the tiering is a proxy for risk quality for an insurer – particularly for those assessing loss-of-revenue scenarios following physical damage. As developers are most likely to seek certification for Tier III or IV, certification inherently means reduced natural catastrophe exposure: sites should sit outside a 1-in-100-year flood zone (which should extend to consider flood following windstorm), outside wildfire zones, and have minimal to no earthquake exposure. Even for residual natural catastrophe exposure, the high availability requirements of Tiers III-IV mean data centers must adopt robust designs with multiple layers of redundancy.
The high price of lost time in the race for the most advanced model, and any associated lost revenues from end users, are driving high time-based availability (uptime) rates at data centers: up to 99.995% for a data center versus 99.1% for a solar project[4]. It might not look like much of a difference, but in time it means less than 30 minutes of downtime in a year for a data center compared with 79 hours, or more than three days, for a solar project. Maintaining such a high uptime rate requires redundancy across all critical systems.
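For readers who want to check the arithmetic, the sketch below (not from the article; the two availability figures are the ones quoted above) converts an availability percentage into annual downtime:

```python
# Downtime implied by a time-based availability rate over one year.
# The 99.995% and 99.1% figures are from the article; the rest is arithmetic.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year implied by an availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

for label, availability in [("AI data center", 99.995), ("Solar project", 99.1)]:
    hours = annual_downtime_hours(availability)
    print(f"{label}: {availability}% uptime -> {hours:.1f} h/year "
          f"({hours * 60:.0f} minutes)")

# AI data center: 99.995% uptime -> 0.4 h/year (26 minutes)
# Solar project: 99.1% uptime -> 78.8 h/year (4730 minutes)
```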
Although certification bodies like Uptime Institute define the tiers, they do not define how to achieve the requirements. The result is that insureds must ensure their data centers are properly presented to the insurance market via clear and thorough submissions. Treating a market submission like that of any other construction project will not result in the best deal.
The proliferation of energy-hungry data centers is driving an explosion in electricity load growth. For those developing in geographical hotspots, it means waiting in line for new or expanded grid load interconnections ('load' is the word grid operators use for power consumers; until recently, talk of grid queues was about generators). Waiting is not an option for companies wanting to stay ahead in the AI race (and, for some, the goal of artificial general intelligence (AGI)), and patience is not a Silicon Valley virtue. The result is data center developers building their own on-site (co-located) power plants, often behind-the-meter, to serve the interim or permanent power requirements of the data center while the grid is being expanded.
The impatient hyperscalers then look for the quickest technology they can install to start generating power. In most states, it is a toss-up between solar and batteries and/or small gas generators. The resolve of the hyperscalers to adhere to environmental goals will be tested where sub-megawatt gas generators are the quickest – the choice also has implications for which underwriters to approach[5]. Buying existing generating assets and co-locating a data center is an option, but one regulators are likely to push back on given it could remove critical infrastructure from the grid. On-site power plants come with scheduling and interface risks.
With scheduling, the power plant needs to be ready in time for the data center so that the two can be hot commissioned and handed over. Any delay, for example in permitting (of either the power plant or the data center) or in procuring valuable servers, could lead to idle equipment and temporary power solutions. Even without a claim, this could have insurance implications. Extending a construction policy due to a protracted commissioning phase can be expensive: you might get the first three months at pro-rata, but if the extension runs beyond three months or into another natural catastrophe season, the costs can ramp up quickly. For an insured, it is important to have assessed any pre-agreed extensions against realistic delay scenarios.
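To see why the costs ramp up, consider a hedged illustration. The three-month pro-rata allowance is from the article; the annual premium and the 1.5x post-allowance loading are hypothetical figures chosen purely to show the shape of the curve:

```python
# Illustrative cost of extending a construction policy, assuming (per the
# article) the first three months are charged at pro-rata and (a hypothetical
# assumption) later months attract a 1.5x loading. Real terms are negotiated.

ANNUAL_PREMIUM = 12_000_000  # hypothetical construction premium, USD
PRO_RATA_MONTHS = 3          # pro-rata allowance mentioned in the article
LOADING = 1.5                # hypothetical multiplier after the allowance

def extension_cost(months: int) -> float:
    """Additional premium for extending the policy by `months`."""
    monthly = ANNUAL_PREMIUM / 12
    pro_rata = min(months, PRO_RATA_MONTHS) * monthly
    loaded = max(months - PRO_RATA_MONTHS, 0) * monthly * LOADING
    return pro_rata + loaded

for m in (3, 6, 9):
    print(f"{m}-month extension: ${extension_cost(m):,.0f}")

# 3-month extension: $3,000,000
# 6-month extension: $7,500,000
# 9-month extension: $12,000,000
```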
As data centers get bigger, construction will take longer, and phased handovers will become commonplace. Up to now, it has taken around two and a half years to construct a data center[6]. Future projects are likely to start pushing four to five years. Reinsurance treaties can limit some insurers to shorter-than-required periods or, where the period is within appetite, leave little room for extensions. Insureds must be aware of the risk these limits pose to extending policies in the event of delays. Policy design is also critical: for phased handovers, construction policies must dovetail with the operational policies, including early-operations property damage and business interruption coverages.
When poorly defined, the transition phase between construction and operations can be a claims quagmire. Mitigation measures for an insured are to place the first year of operations on the same policy, with the same insurers, as the construction period, and to define handover clearly. Phrases such as 'commercial operations date', 'commissioned' and 'handover' can mean different things to different parties. Clarity is required to ensure a clean transition between insurance cover during construction and operations.
It is not only the designers of IT systems that are constantly innovating. In a bid to lower operating costs, developers are innovating with cooling systems to improve efficiency (they are chasing a lower PUE – power usage effectiveness – ratio). Here, construction and property underwriters can learn from their energy transition colleagues. Liquid cooling is now a popular cooling method given the racks of the latest data centers require more power and give off more heat than their predecessors. Liquid cooling was successfully adopted in the BESS sector to manage the higher energy density of the latest battery containers. So while liquid cooling of data centers is innovation in the strictest sense, the method itself is established. More aggressive innovation is reserved for demonstration projects: underwater data centers or, lately, earth-orbiting data centers.
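For readers unfamiliar with the metric, PUE is total facility energy divided by the energy delivered to the IT equipment, so a lower ratio means less overhead spent on cooling and power distribution. A minimal sketch, with hypothetical figures:

```python
# PUE (power usage effectiveness) = total facility energy / IT equipment
# energy. A ratio of 1.0 would mean every watt goes to IT; real facilities
# sit above that. The figures below are hypothetical, for illustration only.

def pue(total_facility_mwh: float, it_equipment_mwh: float) -> float:
    """Power usage effectiveness over a given period."""
    return total_facility_mwh / it_equipment_mwh

air_cooled = pue(total_facility_mwh=150_000, it_equipment_mwh=100_000)
liquid_cooled = pue(total_facility_mwh=115_000, it_equipment_mwh=100_000)
print(f"Air-cooled PUE:    {air_cooled:.2f}")     # 1.50
print(f"Liquid-cooled PUE: {liquid_cooled:.2f}")  # 1.15
```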
The declared sums insured could pose a problem in the event of a large market loss. Graphics processing units (GPUs) are the core computing element of the IT equipment (think AI chips) and are in high demand. There is no price transparency, and unlike with solar panels and batteries, where buyers have a wide selection, today the supply of top-of-the-range GPUs is dominated by Nvidia (Broadcom and AMD are trying to change that). A large insurance loss event, say a particularly extreme natural catastrophe near a data center cluster, could send prices for replacement Nvidia GPUs soaring. The escalation clause on a policy is typically treated as inflation protection and limited to no more than 10%. For data centers, a higher escalation provision could be prudent – or, more creatively, the GPU element of the sums insured could be linked to a market price index.
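One way to picture the index-linked idea: compare a flat 10% escalation cap with a sum insured that tracks a (hypothetical) GPU price index. All figures and the index mechanism below are illustrative assumptions, not policy language:

```python
# A sketch of the article's suggestion: link the GPU element of the sums
# insured to a market price index rather than a flat escalation cap. All
# figures and the index mechanism here are hypothetical illustrations.

GPU_SUM_INSURED = 500_000_000   # declared GPU value at inception, USD
FLAT_ESCALATION_CAP = 0.10      # typical inflation-style cap per the article

def indemnity_basis(index_at_inception: float, index_at_loss: float) -> dict:
    """Compare a flat 10% escalation cap with an index-linked adjustment."""
    index_ratio = index_at_loss / index_at_inception
    return {
        "flat_cap": GPU_SUM_INSURED * (1 + FLAT_ESCALATION_CAP),
        "index_linked": GPU_SUM_INSURED * index_ratio,
    }

# Suppose a loss event sends a (hypothetical) GPU price index from 100 to 140:
basis = indemnity_basis(index_at_inception=100, index_at_loss=140)
print(f"Flat 10% cap: ${basis['flat_cap']:,.0f}")      # $550,000,000
print(f"Index-linked: ${basis['index_linked']:,.0f}")  # $700,000,000
```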
Sources:
1. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers
2. https://www.projectfinance.law/publications/2025/june/data-center-financing-structures/
3. https://uptimeinstitute.com/tiers
4. P50 system availability of 0.991: https://docs.nrel.gov/docs/fy24osti/88590.pdf
5. IEA, BloombergNEF
6. BloombergNEF
P.S. For the nitpickers: in homage to Nardac starting in Newport Beach, California, we used the American spelling – data center – rather than the British – data centre.