
AI Data Centres and Power Quality — A New Category of Grid Disturbance

Sources: Li et al. — MDPI Energies (2026) · Zhang et al. — arXiv:2509.07218 (2025) · IPQDF Case Study Series · Harmonics · Voltage Sags · Frequency Deviation · Flicker · Commentary: Denis Ruest, M.Sc. (Applied), P.Eng. (ret.)
Case at a Glance
Load type: Hyperscale AI data centres — GPU clusters, server power supplies, advanced cooling, UPS systems
Scale: 100 MW to 1+ GW per campus — individual facilities now exceeding the generating capacity of a small power station
Key PQ distinction vs. conventional DC: AI training synchronises GPU operation — megawatts of load changing in under one second — producing oscillatory load signatures unknown in conventional data centres
Harmonic profile: THD often exceeding 5% — 3rd, 5th, and 7th dominant — parallel resonance risk with grid impedance
Transient load ramp rate: Several megawatts per second during training burst initiation — causes voltage flicker and frequency deviation at the PCC
Voltage sag risk to grid: Simultaneous UPS disconnection during voltage sags — Northern Virginia: hundreds of MW disconnecting at once
Documented grid incident: Dominion Energy grid event triggered by once-per-second voltage sags from a data centre facility
Regulatory gap: No specific grid codes for AI data centre load behaviour — IEEE 1547 and equivalent European codes were written for generators, not large non-linear loads

01 Context — When Data Centres Became Grid-Scale Problems

For two decades, data centres were managed as facility-level power quality problems: large collections of single-phase switch-mode power supplies drawing harmonic currents, requiring careful neutral conductor sizing, UPS ride-through specification, and occasionally active harmonic filtering at the distribution board level. Their grid impact was negligible — a 10 MW data centre connected to a 500 MVA substation is a 2% load, not a grid stability concern.

This has changed. AI model training requires the simultaneous operation of tens of thousands of GPU accelerators, drawing power at densities of 30–100 kW per rack, in buildings of 100 MW to several hundred megawatts. In regions with high AI data centre concentration — Northern Virginia, Phoenix, Singapore, the Amsterdam–Frankfurt corridor — individual transmission nodes now serve gigawatts of AI compute load. At this scale, the power quality behaviour of the data centre is no longer a facility problem. It is a grid problem.

The Scale Shift — From kW to GW in a Decade

A conventional enterprise data centre of the 2010s drew 5–20 MW with a relatively stable, continuous load profile. A hyperscale AI training facility of 2025 draws 100–500 MW with a highly dynamic load profile that changes by tens of megawatts per second. The Northern Virginia data centre corridor now hosts more than 3 GW of connected data centre load on a single regional transmission system. When a training job completes, or when a fault triggers simultaneous UPS disconnection across multiple facilities, the instantaneous load change can be comparable to losing a large generating unit — triggering the same frequency stability concerns that motivated the development of under-frequency load shedding schemes.
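For a sense of the magnitude involved, the initial rate of change of frequency after such an event can be estimated from the aggregate swing equation, df/dt = ΔP·f0 / (2·H·S). The sketch below is illustrative only; the inertia constant and system base are assumed values, not figures from the cited studies.

    # Estimate initial rate of change of frequency (RoCoF) after a sudden
    # load loss, using the aggregate swing equation:
    #   df/dt = delta_P * f0 / (2 * H * S_base)
    # All values below are assumptions for illustration, not from the
    # cited studies.

    f0 = 60.0          # nominal frequency, Hz
    H = 4.0            # assumed aggregate inertia constant, s
    S_base = 100e3     # assumed synchronised generation base, MVA (100 GVA)
    delta_P = 2000.0   # sudden load loss, MW (e.g. mass UPS disconnection)

    rocof = delta_P * f0 / (2 * H * S_base)   # Hz per second
    print(f"Initial RoCoF: {rocof:.3f} Hz/s")  # ~0.15 Hz/s for these numbers

A sudden load loss drives frequency upward rather than downward, but the magnitude of the excursion is governed by the same inertia physics as a generator trip.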

02 A Different Kind of Load — The GPU Training Signature

Conventional data centre loads — web servers, storage systems, networking equipment — draw power in a relatively smooth, continuous pattern. Individual servers vary their consumption with utilisation, but the aggregate of thousands of diverse workloads averages out to a stable, slowly varying total demand. This statistical averaging is why conventional data centre loads have good power factor and relatively low harmonic content at the substation level.

AI training loads break this averaging assumption. During distributed GPU training, thousands of GPUs operate in tight synchronisation — they all compute simultaneously during the forward and backward pass, then all communicate simultaneously during the gradient synchronisation step, then all compute again. This synchronised operation creates an oscillatory load signature: the entire facility alternates between high-power computation phases and lower-power communication phases at a rate determined by the training algorithm’s iteration frequency.
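The contrast between statistical averaging and synchronised operation can be made concrete with a toy simulation. This is a minimal sketch: the fleet size, utilisation range, and phase power levels are illustrative assumptions, not measurements from any facility.

    import random

    # Toy comparison: aggregate load of N independent servers (conventional
    # DC) vs. N GPUs locked to the same compute/communication cycle (AI
    # training). All numbers are illustrative assumptions.

    N = 10_000
    steps = 20  # time steps, one per half training iteration

    for t in range(steps):
        # Conventional DC: each server's utilisation varies independently,
        # so fluctuations average out across the fleet.
        conv = sum(0.3 + 0.4 * random.random() for _ in range(N)) / N

        # AI training: every GPU is in the same phase at the same time --
        # high power while computing, low power while exchanging gradients.
        ai = 1.0 if t % 2 == 0 else 0.4

        print(f"t={t:2d}  conventional={conv:.2f} p.u.   ai_training={ai:.2f} p.u.")

The conventional aggregate stays pinned near 0.50 p.u. with negligible variation, while the synchronised fleet swings between 1.0 and 0.4 p.u. every step.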

[Fig. 1 diagram: power (MW) vs. time — conventional DC: smooth, statistical averaging; AI DC: synchronised GPU compute and communication phases, ramps >10 MW/s]
Fig. 1 — Load signature comparison. A conventional data centre draws a smooth, slowly varying load — the statistical average of thousands of independent workloads. An AI training cluster creates an oscillatory signature as thousands of GPUs synchronise between compute and communication phases, with power changes exceeding 10 MW/second during training burst transitions.
The Synchronisation Problem

The loss of statistical averaging in AI training loads is fundamental — it is not a design defect that can be fixed with better power supply specification. GPU synchronisation is required by the distributed training algorithm. Every GPU in a training run must complete its gradient computation before the synchronisation step can begin, and every GPU must receive the updated gradients before the next compute phase can start. The alternating high-power and lower-power phases are an intrinsic property of the workload, not an artefact of the power supply design. Smoothing can be applied — rack-level batteries, firmware-controlled ramp rate limits, dummy workload injection during communication phases — but the oscillation cannot be eliminated entirely without compromising training efficiency.

03 Power Quality Issues at the Facility Level

Harmonics

GPU server power supplies are switch-mode converters — they draw non-sinusoidal current with THD often exceeding 5%, dominated by 3rd, 5th, and 7th harmonics. At the scale of a 100 MW AI data centre with thousands of server power supplies operating simultaneously, the aggregate harmonic current at the facility substation can be substantial. One facility cited in the literature required installation of a dedicated harmonic mitigation solution after producing excessive voltage harmonic distortion on its supply grid.
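For reference, current THD is the RMS of the harmonic components relative to the fundamental, THD = sqrt(Σ I_h²) / I_1. A minimal sketch with an assumed harmonic spectrum matching the 3rd/5th/7th-dominant profile described above:

    import math

    # Current THD from an assumed harmonic spectrum (per-unit of the
    # fundamental). The magnitudes below are illustrative, not measured.
    harmonics = {3: 0.040, 5: 0.035, 7: 0.020, 11: 0.010, 13: 0.008}

    thd = math.sqrt(sum(i**2 for i in harmonics.values()))  # I1 = 1.0 p.u.
    print(f"Current THD: {thd * 100:.1f}%")  # ~5.8%, above the 5% figure cited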

The harmonic risk specific to AI data centres — beyond what conventional data centres produce — is parallel resonance. The rapid installation of large power factor correction capacitor banks and UPS capacitor stages in high-density facilities can create resonant circuits at specific harmonic frequencies. When the facility’s harmonic current coincides with a resonant frequency of the network, harmonic voltages are amplified — potentially to levels that cause transformer overheating, protection relay misoperation, or equipment damage across the connected distribution network.
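The harmonic order at which this parallel resonance lands can be screened with the standard estimate h_r ≈ sqrt(S_sc / Q_c), where S_sc is the short-circuit capacity at the bus and Q_c the capacitor bank rating. The values in the sketch below are assumptions, chosen to show how easily the resonance can fall on a dominant harmonic:

    import math

    # Estimate the parallel resonant harmonic order at a bus:
    #   h_r ~ sqrt(S_sc / Q_c)
    # S_sc: short-circuit capacity, Q_c: capacitor bank rating.
    # Both values are assumptions for illustration.

    S_sc = 500.0   # MVA, assumed fault level at the facility substation
    Q_c = 20.0     # Mvar, assumed PFC capacitor bank

    h_r = math.sqrt(S_sc / Q_c)
    print(f"Resonant harmonic order: ~{h_r:.1f}")  # = 5.0 -> lands on the 5th

With these numbers the resonance falls exactly on the 5th harmonic, one of the dominant orders in the GPU power supply spectrum.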

Voltage flicker and frequency deviation

The synchronised training burst load signature described in Section 02 creates voltage flicker at the point of common coupling. When the entire facility ramps from communication-phase load to compute-phase load — a change of tens of megawatts in under a second — the voltage at the PCC drops briefly, then recovers as upstream voltage regulation responds. If these transitions repeat at a rate that falls in the 1–15 Hz range of peak human visual sensitivity, they produce perceptible light flicker for other customers connected to the same substation — a community impact problem analogous to the industrial welding machine flicker described in CS06, but at vastly larger scale.
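A first-order screening of the voltage change uses the ratio of the load step to the fault level at the PCC, d ≈ ΔS / S_sc. The figures in the sketch below are assumptions for illustration:

    # Approximate relative voltage change at the PCC from a step load change:
    #   d ~ delta_S / S_sc   (rough screening estimate only)
    # Both figures below are assumptions for illustration.

    S_sc = 1500.0     # MVA, assumed short-circuit capacity at the PCC
    delta_S = 30.0    # MVA, assumed compute-phase load step

    d = delta_S / S_sc
    print(f"Relative voltage change: {d * 100:.1f}%")  # 2.0% per step
    # Repeated at a few hertz, steps of this size fall squarely in the
    # 1-15 Hz band of peak human flicker sensitivity described above.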

Voltage unbalance and interharmonics

Large AI data centres with dense single-phase server loads across three-phase distribution systems create voltage unbalance when the loads are not perfectly balanced across phases. The neutral current from triplen harmonics — third harmonic dominant in switch-mode power supplies — adds to the unbalance problem. In addition, certain switching patterns in high-frequency GPU power converters produce interharmonic components — frequency components that are not integer multiples of the fundamental — which can create beat frequencies with other equipment and cause unusual interference patterns not addressed by standard harmonic limits.
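Unbalance is commonly screened with the NEMA definition: the maximum deviation of a phase voltage from the three-phase average, divided by that average. A minimal sketch with assumed phase readings:

    # NEMA line-voltage unbalance rate (LVUR):
    #   LVUR = max |V_phase - V_avg| / V_avg * 100
    # Phase magnitudes below are assumed example readings in volts.

    v = [398.0, 402.0, 411.0]          # assumed line-to-line voltages
    v_avg = sum(v) / len(v)
    lvur = max(abs(x - v_avg) for x in v) / v_avg * 100
    print(f"Voltage unbalance: {lvur:.2f}%")  # ~1.8% for these readings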

04 Grid-Level Risks — Beyond the Facility Fence

At gigawatt scale and geographic concentration, AI data centre PQ behaviour creates risks that extend far beyond the facility’s own distribution system:

  • Simultaneous UPS disconnection — Mechanism: during voltage sags, multiple facilities disconnect UPS loads simultaneously, removing hundreds of MW of load instantaneously. Documented scale: Northern Virginia, 2.6 GW simultaneous disconnection risk identified. Precedent: ERCOT analysis of the threshold for grid instability.
  • Frequency instability — Mechanism: multi-MW/second load ramps from training bursts challenge frequency regulation, similar to generator tripping events. Documented scale: ±0.5 Hz frequency deviations documented in high-density areas. Precedent: Dominion Energy grid event.
  • Harmonic resonance propagation — Mechanism: harmonic currents from a large facility interact with network impedance and are amplified at resonant frequencies. Documented scale: transformer overheating, protection relay issues. Precedent: multiple documented incidents requiring harmonic filters.
  • Flicker at community scale — Mechanism: periodic training burst transitions at sub-hertz rates create systematic light flicker on shared substation buses. Documented scale: visible to all customers at the same substation. Precedent: Dominion Energy once-per-second sag incident.
[Fig. 2 diagram: Northern Virginia data centre corridor — regional transmission bus serving 3+ GW of AI data centre load: DC-A 400 MW (AI training), DC-B 300 MW (AI training), DC-C 500 MW (AI training), DC-D 200 MW (colocation), DC-E 600 MW (AI training); simultaneous voltage sag → UPS disconnection across all facilities = 2,000 MW instantaneous load loss]
Fig. 2 — Geographic concentration risk. Multiple AI data centres connected to the same regional transmission bus share the same PQ environment. A voltage sag that triggers simultaneous UPS disconnection across multiple facilities can remove gigawatts of load instantaneously — a load-loss event of the same magnitude as losing a large generating unit, creating a mirror-image frequency stability problem.

05 Mitigation — Technical and Operational Approaches

Mitigation of AI data centre PQ impacts operates at two levels: the facility level (reducing what the data centre emits into the grid) and the grid level (improving the grid’s ability to absorb what the data centre emits).

Facility-level measures

  • Active harmonic filters (APF) and static var generators (SVG) — can reduce facility harmonic THD to below 3%. Required when the facility’s harmonic current, combined with network impedance, produces voltage THD above the IEEE 519 limit at the PCC
  • Rack-level battery energy storage — buffers the training burst load transients by providing or absorbing power during compute-to-communication phase transitions. Tesla Megapack deployments at AI data centre campuses have demonstrated effective load smoothing at 100+ MW scale
  • Firmware-controlled GPU ramp rate limits — software constraints that limit the rate at which GPUs increase their power draw during training burst initiation, reducing the dP/dt seen by the grid from 10+ MW/s to a controlled ramp of 1–2 MW/s (in essence a slew-rate limiter; see the sketch after this list)
  • Dummy workload injection — maintaining minimum power consumption during communication phases by running non-critical compute tasks, reducing the depth of the oscillatory signature and limiting the load swing magnitude
  • Phase balancing and load redistribution — systematic assignment of server loads across phases to minimise neutral current and voltage unbalance at the facility substation
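The firmware ramp rate limit described above is, in essence, a slew-rate limiter on facility power. A minimal sketch of that logic; the function name, interface, and numbers are illustrative assumptions, not any vendor's actual firmware API:

    def limit_ramp(p_target_mw, p_current_mw, max_ramp_mw_s, dt_s):
        """Slew-rate limiter: move toward the target power but never
        faster than max_ramp_mw_s. Illustrative sketch only."""
        max_step = max_ramp_mw_s * dt_s
        delta = p_target_mw - p_current_mw
        if abs(delta) <= max_step:
            return p_target_mw
        return p_current_mw + max_step * (1 if delta > 0 else -1)

    # A training burst wants to jump 40 -> 90 MW instantly; the limiter
    # spreads the transition over 25 s at 2 MW/s.
    p = 40.0
    for _ in range(5):
        p = limit_ramp(90.0, p, max_ramp_mw_s=2.0, dt_s=1.0)
        print(f"{p:.0f} MW")   # 42, 44, 46, 48, 50 ...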

Grid-level measures

  • Coordinated UPS ride-through specifications — requiring AI data centre UPS systems to maintain grid connection down to 50–70% of nominal voltage for at least one second before disconnecting, preventing the simultaneous mass disconnection risk (see the sketch after this list)
  • Fault ride-through requirements — analogous to the requirements imposed on renewable generators under IEEE 1547 and European grid codes, requiring AI data centres to remain connected during short-term voltage and frequency disturbances rather than disconnecting to protect hardware
  • Dynamic performance requirements at the PCC — specifying harmonic emission limits, ramp rate limits, reactive power support obligations, and voltage tolerance ranges as conditions of grid connection approval for facilities above a defined threshold
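The ride-through specifications above reduce to a voltage-time characteristic: disconnect only if voltage stays below a threshold for longer than a hold time. A minimal sketch of that logic, assuming the 50% threshold and one-second hold quoted in the first bullet; the function and sampling scheme are illustrative, not drawn from any grid code:

    def should_disconnect(v_pu_samples, dt_s, v_min_pu=0.5, hold_s=1.0):
        """Return True only if voltage stays below v_min_pu for longer
        than hold_s, i.e. ride through shorter sags. Sketch only."""
        below = 0.0
        for v in v_pu_samples:
            below = below + dt_s if v < v_min_pu else 0.0
            if below > hold_s:
                return True
        return False

    # A 0.4 s sag to 0.45 p.u. sampled at 0.1 s: ride through, no trip.
    sag = [1.0, 1.0] + [0.45] * 4 + [1.0, 1.0]
    print(should_disconnect(sag, dt_s=0.1))  # False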
The Regulatory Direction of Travel

Multiple grid operators — ERCOT, PJM, National Grid — are actively developing specific grid connection requirements for large AI data centre loads. The direction of travel is clear: data centres above a threshold size (typically 50–100 MW) will be required to demonstrate fault ride-through capability, harmonic compliance at the PCC, and controlled ramp rate behaviour as conditions of transmission connection. Facilities that cannot demonstrate compliance will face either mandatory retrofit of harmonic mitigation and battery storage, or connection to a dedicated substation with a stiffer, lower-impedance supply. The investment case for proactive PQ compliance is compelling.

06 Utility Power Quality Perspective

AI data centres represent the most significant new category of power quality challenge to emerge for utility distribution engineers since the proliferation of VFDs in the 1990s. The parallel is instructive: VFDs were initially installed without PQ assessment requirements, causing harmonic problems that took a decade to address through retroactive application of IEEE 519. The same pattern is already visible with AI data centres — rapid deployment, inadequate PQ requirements at connection approval, and growing documentation of grid impacts that are now driving retrospective regulatory action.

The key difference is scale. A non-compliant VFD installation affects one facility and perhaps a few adjacent customers. A 500 MW AI data centre with inadequate harmonic mitigation and no fault ride-through requirement can affect thousands of customers across a regional substation area, and its simultaneous disconnection during a voltage sag can threaten grid stability across a transmission zone.

References

  1. Li B et al. “Power for AI Data Centers: Energy Demand, Grid Impacts, Challenges and Perspectives.” Energies, 19(3), 722, January 2026. DOI: 10.3390/en19030722. Open access CC BY 4.0.
  2. Zhang Y et al. “Electricity Demand and Grid Impacts of AI Data Centers: Challenges and Prospects.” arXiv:2509.07218, September 2025. Available: arxiv.org/abs/2509.07218
  3. Zhao S et al. “Technical Challenges of AI Data Center Integration into Power Grids — A Survey.” Energies, 19(1), 137, December 2025. DOI: 10.3390/en19010137. Open access CC BY 4.0.
  4. NERC / ERCOT. Large Load Integration Workshop Presentations. North American Electric Reliability Corporation, April–May 2025.
  5. IEEE Std 519-2022. IEEE Standard for Harmonic Control in Electric Power Systems. IEEE, New York, NY, 2022.
  6. IEEE Std 1547-2018. IEEE Standard for Interconnection and Interoperability of Distributed Energy Resources with Associated Electric Power Systems Interfaces. IEEE, New York, NY, 2018.
Source & Attribution

Primary sources: Li B et al., Energies 19(3):722 (2026), DOI: 10.3390/en19030722, CC BY 4.0 · Zhang Y et al., arXiv:2509.07218 (2025) · Zhao S et al., Energies 19(1):137 (2025), CC BY 4.0. Documented grid incident: Dominion Energy system, as reported in Zhang et al. (2025).

SVG diagrams and the Utility PQ Perspective section (Section 6) are original IPQDF editorial content by Denis Ruest, M.Sc. (Applied), P.Eng. (ret.). IPQDF does not claim authorship of the original research.
