AI Data Centres and Power Quality — A New Category of Grid Disturbance
| Aspect | Summary |
|---|---|
| Load type | Hyperscale AI data centres — GPU clusters, server power supplies, advanced cooling, UPS systems |
| Scale | 100 MW to 1+ GW per campus — individual facilities now exceeding the generating capacity of a small power station |
| Key PQ distinction vs. conventional data centres | AI training synchronises GPU operation — megawatts of load changing in under one second — producing oscillatory load signatures unknown in conventional data centres |
| Harmonic profile | THD often exceeding 5% — 3rd, 5th, and 7th dominant — parallel resonance risk with grid impedance |
| Transient load ramp rate | Several megawatts per second during training burst initiation — causes voltage flicker and frequency deviation at the PCC |
| Voltage sag risk to grid | Simultaneous UPS disconnection during voltage sags — Northern Virginia: hundreds of MW disconnecting at once |
| Documented grid incident | Dominion Energy grid event triggered by a once-per-second voltage sag from a data centre facility |
| Regulatory gap | No specific grid codes for AI data centre load behaviour — IEEE 1547 and equivalent European codes written for generators, not large non-linear loads |
01 Context — When Data Centres Became Grid-Scale Problems
For two decades, data centres were managed as facility-level power quality problems: large collections of single-phase switch-mode power supplies drawing harmonic currents, requiring careful neutral conductor sizing, UPS ride-through specification, and occasionally active harmonic filtering at the distribution board level. Their grid impact was negligible — a 10 MW data centre connected to a 500 MVA substation is a 2% load, not a grid stability concern.
This has changed. AI model training requires the simultaneous operation of tens of thousands of GPU accelerators, drawing power at densities of 30–100 kW per rack, in buildings of 100 MW to several hundred megawatts. In regions with high AI data centre concentration — Northern Virginia, Singapore, the Amsterdam–Frankfurt corridor — individual transmission nodes now serve gigawatts of AI compute load. At this scale, the power quality behaviour of the data centre is no longer a facility problem. It is a grid problem.
A conventional enterprise data centre of the 2010s drew 5–20 MW with a relatively stable, continuous load profile. A hyperscale AI training facility of 2025 draws 100–500 MW with a highly dynamic load profile that changes by tens of megawatts per second. The Northern Virginia data centre corridor now hosts more than 3 GW of connected data centre load on a single regional transmission system. When a training job completes, or when a fault triggers simultaneous UPS disconnection across multiple facilities, the instantaneous load change can be comparable to losing a large generating unit — triggering the same frequency stability concerns that motivated the development of under-frequency load shedding schemes.
02 A Different Kind of Load — The GPU Training Signature
Conventional data centre loads — web servers, storage systems, networking equipment — draw power in a relatively smooth, continuous pattern. Individual servers vary their consumption with utilisation, but the aggregate of thousands of diverse workloads averages out to a stable, slowly varying total demand. This statistical averaging is why conventional data centre loads have good power factor and relatively low harmonic content at the substation level.
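The averaging claim follows from elementary statistics — a step worth making explicit, since it is exactly what AI training breaks. For $N$ independent loads, each with mean draw $\mu$ and standard deviation $\sigma$, the aggregate relative fluctuation is

$$\frac{\sigma_{\text{total}}}{\mu_{\text{total}}} = \frac{\sigma\sqrt{N}}{\mu N} = \frac{\sigma}{\mu\sqrt{N}}$$

so ten thousand uncorrelated servers fluctuate, in relative terms, one hundred times less than a single server. The result rests on independence — which, as the next paragraphs show, AI training removes.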
AI training loads break this averaging assumption. During distributed GPU training, thousands of GPUs operate in tight synchronisation — they all compute simultaneously during the forward and backward pass, then all communicate simultaneously during the gradient synchronisation step, then all compute again. This synchronised operation creates an oscillatory load signature: the entire facility alternates between high-power computation phases and lower-power communication phases at a rate determined by the training algorithm’s iteration frequency.
The loss of statistical averaging in AI training loads is fundamental — it is not a design defect that can be fixed with better power supply specification. GPU synchronisation is required by the distributed training algorithm. Every GPU in a training run must complete its gradient computation before the synchronisation step can begin, and every GPU must receive the updated gradients before the next compute phase can start. The alternating high-power and lower-power phases are an intrinsic property of the workload, not an artefact of the power supply design. Smoothing can be applied — rack-level batteries, firmware-controlled ramp rate limits, dummy workload injection during communication phases — but cannot be eliminated entirely without compromising training efficiency.
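To make the signature concrete, the sketch below synthesises an idealised facility load trace and reports the resulting swing and ramp rate. All parameters — power levels, iteration period, phase split — are illustrative assumptions for the sketch, not figures from the cited papers.

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not measured values)
P_COMPUTE_MW = 120.0   # facility draw during the forward/backward pass
P_COMMS_MW = 70.0      # facility draw during gradient synchronisation
ITERATION_S = 1.0      # iteration period (cf. the once-per-second sag incident)
COMMS_FRACTION = 0.3   # share of each iteration spent communicating
DT = 0.001             # simulation time step, seconds

t = np.arange(0.0, 10.0, DT)
phase = (t % ITERATION_S) / ITERATION_S
# Square-wave alternation: compute phase, then communication phase, each iteration.
load_mw = np.where(phase < (1.0 - COMMS_FRACTION), P_COMPUTE_MW, P_COMMS_MW)

ramp = np.gradient(load_mw, DT)   # MW/s; the idealised step makes dP/dt extreme
print(f"Load swing: {load_mw.max() - load_mw.min():.0f} MW")
print(f"Peak ramp rate: {abs(ramp).max():.0f} MW/s (real transitions take milliseconds)")
```

Even this toy model reproduces the two properties that matter at the PCC: a repetitive multi-megawatt swing, and transitions fast enough that the grid effectively sees a step.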
03 Power Quality Issues at the Facility Level
Harmonics
GPU server power supplies are switch-mode converters — they draw non-sinusoidal current with THD often exceeding 5%, dominated by 3rd, 5th, and 7th harmonics. At the scale of a 100 MW AI data centre with thousands of server power supplies operating simultaneously, the aggregate harmonic current at the facility substation can be substantial. One facility cited in the literature required installation of a dedicated harmonic mitigation solution after producing excessive voltage harmonic distortion on its supply grid.
The harmonic risk specific to AI data centres — beyond what conventional data centres produce — is parallel resonance. The rapid installation of large power factor correction capacitor banks and UPS capacitor stages in high-density facilities can create resonant circuits at specific harmonic frequencies. When the facility’s harmonic current coincides with a resonant frequency of the network, harmonic voltages are amplified — potentially to levels that cause transformer overheating, protection relay misoperation, or equipment damage across the connected distribution network.
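A first-order check for this risk uses the standard approximation for the parallel resonant harmonic order at a capacitor bus; the worked numbers are illustrative, not values from the cited incidents:

$$h_r \approx \sqrt{\frac{S_{sc}}{Q_c}}$$

where $S_{sc}$ is the short-circuit capacity at the point of connection and $Q_c$ is the installed capacitor rating. For example, $S_{sc} = 500$ MVA with $Q_c = 20$ Mvar gives $h_r \approx 5$ — resonance sitting directly on the 5th harmonic, one of the dominant orders in the GPU power-supply spectrum.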
Voltage flicker and frequency deviation
The synchronised training burst load signature described in Section 02 creates voltage flicker at the point of common coupling. When the entire facility ramps from communication-phase load to compute-phase load — a change of tens of megawatts in under a second — the voltage at the PCC dips, recovering as upstream voltage regulation and the grid’s frequency control respond. If this ramp occurs at a rate that falls in the 1–15 Hz frequency range of peak human visual sensitivity, it produces perceptible light flicker on other customers connected to the same substation — a community impact problem analogous to the industrial welding machine flicker described in CS06, but at vastly larger scale.
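The severity of such fluctuations can be estimated to first order from the relative voltage change at the PCC — the figures here are illustrative assumptions, not measurements from any cited event:

$$d \approx \frac{\Delta S}{S_{sc}}$$

where $\Delta S$ is the step change in facility apparent power and $S_{sc}$ is the fault level at the PCC. A 30 MW swing at a PCC with a 1,000 MVA fault level gives $d \approx 3\%$; since repetitive voltage changes of even a fraction of a percent in the 1–15 Hz band can exceed flicker perception thresholds, a swing of this size repeated every second is a serious flicker source.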
Technical analyses documented in the literature describe a real grid event on the Dominion Energy system triggered by a data centre facility producing a voltage sag at exactly once per second — the iteration frequency of a training workload. The regular, precisely timed voltage sag propagated to other customers on the same substation bus, causing systematic interference with equipment sensitive to precisely this frequency of supply disturbance. This is not a theoretical risk. It is a documented operational incident with an identified cause that the existing power quality standards framework did not anticipate — because the framework was written for loads whose disturbance frequency is either stationary (harmonics) or random (motor starts, arc furnaces), not deliberately periodic at sub-hertz rates.
Voltage unbalance and interharmonics
Large AI data centres with dense single-phase server loads across three-phase distribution systems create voltage unbalance when the loads are not perfectly balanced across phases. The neutral current from triplen harmonics — third harmonic dominant in switch-mode power supplies — adds to the unbalance problem. In addition, certain switching patterns in high-frequency GPU power converters produce interharmonic components — frequency components that are not integer multiples of the fundamental — which can create beat frequencies with other equipment and cause unusual interference patterns not addressed by standard harmonic limits.
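The triplen contribution to the neutral is worth stating precisely: balanced fundamental currents cancel in the neutral, but third-harmonic currents in the three phases are co-phasal (zero-sequence) and add arithmetically,

$$I_{N,3} = 3\,I_{3,\text{phase}}$$

so even a perfectly phase-balanced facility carries a neutral current set by its harmonic content rather than by its unbalance.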
04 Grid-Level Risks — Beyond the Facility Fence
At gigawatt scale and geographic concentration, AI data centre PQ behaviour creates risks that extend far beyond the facility’s own distribution system:
| Risk | Mechanism | Documented scale | Precedent |
|---|---|---|---|
| Simultaneous UPS disconnection | During voltage sags, multiple facilities disconnect UPS loads simultaneously — removing hundreds of MW of load instantaneously | Northern Virginia: 2.6 GW simultaneous disconnection risk identified | ERCOT analysis — threshold for grid instability |
| Frequency instability | Multi-MW/second load ramps from training bursts challenge frequency regulation — similar to generator tripping events | ±0.5 Hz frequency deviations documented in high-density areas | Dominion Energy grid event |
| Harmonic resonance propagation | Harmonic currents from large facility interact with network impedance — amplified at resonant frequencies | Transformer overheating, protection relay issues | Multiple documented incidents requiring harmonic filters |
| Flicker at community scale | Periodic training burst transitions at sub-hertz rates create systematic light flicker on shared substation buses | Visible on all customers at same substation | Dominion Energy once-per-second sag incident |
05 Mitigation — Technical and Operational Approaches
Mitigation of AI data centre PQ impacts operates at two levels: the facility level (reducing what the data centre emits into the grid) and the grid level (improving the grid’s ability to absorb what the data centre emits).
Facility-level measures
- Active harmonic filters (APF) and static var generators (SVG) — can reduce facility harmonic THD to below 3%. Required when the facility’s harmonic current, combined with network impedance, produces voltage THD above the IEEE 519 limit at the PCC
- Rack-level battery energy storage — buffers the training burst load transients by providing or absorbing power during compute-to-communication phase transitions. Tesla Megapack deployments at AI data centre campuses have demonstrated effective load smoothing at 100+ MW scale
- Firmware-controlled GPU ramp rate limits — software constraints that limit the rate at which GPUs increase their power draw during training burst initiation, reducing the dP/dt seen by the grid from 10+ MW/s to a controlled ramp of 1–2 MW/s (a minimal sketch of this control idea follows this list)
- Dummy workload injection — maintaining minimum power consumption during communication phases by running non-critical compute tasks, reducing the depth of the oscillatory signature and limiting the load swing magnitude
- Phase balancing and load redistribution — systematic assignment of server loads across phases to minimise neutral current and voltage unbalance at the facility substation
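As referenced in the ramp-rate item above, the sketch below shows the control idea at its simplest: a rate limiter that clamps the requested facility power trajectory to a maximum |dP/dt|. It is a minimal illustration under assumed numbers — real implementations live in GPU firmware and rack power controllers, and the 1.5 MW/s limit, control interval, and load levels are assumptions, not vendor parameters.

```python
def limit_ramp(requested_mw, dt_s, max_ramp_mw_per_s=1.5, start_mw=0.0):
    """Follow a requested power trajectory without exceeding the ramp limit."""
    limited, current = [], start_mw
    max_step = max_ramp_mw_per_s * dt_s   # largest allowed move per interval
    for target in requested_mw:
        step = max(-max_step, min(max_step, target - current))
        current += step
        limited.append(current)
    return limited

# Example: a near-instantaneous 50 MW training-burst step, smoothed to 1.5 MW/s.
dt = 0.1                                  # control interval, seconds (assumed)
request = [70.0] * 50 + [120.0] * 500     # comms-phase load, then compute-phase load
smoothed = limit_ramp(request, dt, start_mw=70.0)

ramps = [abs(b - a) / dt for a, b in zip(smoothed, smoothed[1:])]
print(f"Requested step: {max(request) - min(request):.0f} MW between two intervals")
print(f"Peak ramp after limiting: {max(ramps):.2f} MW/s")
```

The same structure generalises to the battery-buffering measure: instead of deferring the GPUs’ power step, a battery inverter supplies the difference between the requested and limited trajectories, so the training job sees no slowdown while the grid still sees the limited ramp.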
Grid-level measures
- Coordinated UPS ride-through specifications — requiring AI data centre UPS systems to maintain grid connection down to 50–70% of nominal voltage for at least one second before disconnecting, preventing the simultaneous mass disconnection risk (a minimal envelope-check sketch follows this list)
- Fault ride-through requirements — analogous to the requirements imposed on renewable generators under IEEE 1547 and European grid codes, requiring AI data centres to remain connected during short-term voltage and frequency disturbances rather than disconnecting to protect hardware
- Dynamic performance requirements at the PCC — specifying harmonic emission limits, ramp rate limits, reactive power support obligations, and voltage tolerance ranges as conditions of grid connection approval for facilities above a defined threshold
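In the simplest case, the ride-through items above reduce to a voltage–time envelope that facility protection must respect before disconnecting. The sketch below evaluates sag events against a two-point envelope, using the illustrative 0.5 pu floor and one-second hold from the first item in this list — assumptions for the sketch, not an adopted grid code.

```python
def may_disconnect(retained_v_pu: float, sag_duration_s: float,
                   v_floor_pu: float = 0.5, hold_s: float = 1.0) -> bool:
    """True if protection may trip under the assumed ride-through envelope."""
    if retained_v_pu < v_floor_pu:
        return True                      # below the envelope floor: tripping permitted
    return sag_duration_s > hold_s       # shallow sag: must ride through for hold_s

# Example events: (retained voltage in per unit, sag duration in seconds)
for v, t in [(0.85, 0.2), (0.60, 0.8), (0.60, 1.5), (0.40, 0.1)]:
    verdict = "may trip" if may_disconnect(v, t) else "must ride through"
    print(f"Sag to {v:.2f} pu for {t:.1f} s: {verdict}")
```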
Multiple grid operators — ERCOT, PJM, National Grid — are actively developing specific grid connection requirements for large AI data centre loads. The direction of travel is clear: data centres above a threshold size (typically 50–100 MW) will be required to demonstrate fault ride-through capability, harmonic compliance at the PCC, and controlled ramp rate behaviour as conditions of transmission connection. Facilities that cannot demonstrate compliance will face either mandatory retrofit of harmonic mitigation and battery storage, or connection via a dedicated substation with a stiffer, lower-impedance supply. The investment case for proactive PQ compliance is compelling.
06 Utility Power Quality Perspective
AI data centres represent the most significant new category of power quality challenge to emerge for utility distribution engineers since the proliferation of VFDs in the 1990s. The parallel is instructive: VFDs were initially installed without PQ assessment requirements, causing harmonic problems that took a decade to address through retroactive application of IEEE 519. The same pattern is already visible with AI data centres — rapid deployment, inadequate PQ requirements at connection approval, and growing documentation of grid impacts that are now driving retrospective regulatory action.
The key difference is scale. A non-compliant VFD installation affects one facility and perhaps a few adjacent customers. A 500 MW AI data centre with inadequate harmonic mitigation and no fault ride-through requirement can affect thousands of customers across a regional substation area, and its simultaneous disconnection during a voltage sag can threaten grid stability across a transmission zone.
Utility power quality engineers are now being asked to assess grid connection applications for facilities that did not exist as a load category when their assessment frameworks were written. The IEEE 519 framework addresses harmonics. The flicker standard addresses voltage fluctuations. Neither was designed for a load that creates megawatt-per-second ramps at precise sub-hertz frequencies, that can simultaneously disconnect hundreds of megawatts in response to a grid voltage event, or that concentrates gigawatts of sensitive non-linear load on a single regional transmission bus. The engineering community is adapting — the papers cited in this case study represent the leading edge of that adaptation. But the gap between the current regulatory framework and the actual grid impact of large AI data centres is wide, and it is the utility distribution engineer who manages that gap in real time while standards committees work to close it.
References
- Li B et al. “Power for AI Data Centers: Energy Demand, Grid Impacts, Challenges and Perspectives.” Energies, 19(3), 722, January 2026. DOI: 10.3390/en19030722. Open access CC BY 4.0.
- Zhang Y et al. “Electricity Demand and Grid Impacts of AI Data Centers: Challenges and Prospects.” arXiv:2509.07218, September 2025. Available: arxiv.org/abs/2509.07218
- Zhao S et al. “Technical Challenges of AI Data Center Integration into Power Grids — A Survey.” Energies, 19(1), 137, December 2025. DOI: 10.3390/en19010137. Open access CC BY 4.0.
- NERC / ERCOT. Large Load Integration Workshop Presentations. North American Electric Reliability Corporation, April–May 2025.
- IEEE Std 519-2022. IEEE Standard for Harmonic Control in Electric Power Systems. IEEE, New York, NY, 2022.
- IEEE Std 1547-2018. IEEE Standard for Interconnection and Interoperability of Distributed Energy Resources with Associated Electric Power Systems Interfaces. IEEE, New York, NY, 2018.
Primary sources: Li B et al., Energies 19(3):722 (2026), DOI: 10.3390/en19030722, CC BY 4.0 · Zhang Y et al., arXiv:2509.07218 (2025) · Zhao S et al., Energies 19(1):137 (2025), CC BY 4.0. Documented grid incident: Dominion Energy system, as reported in Zhang et al. (2025).
SVG diagrams and the Utility PQ Perspective section (Section 06) are original IPQDF editorial content by Denis Ruest, M.Sc. (Applied), P.Eng. (ret.). IPQDF does not claim authorship of the original research.
