Optimizing power electronics
With the rise of AI, machine learning, and ever more intensive cloud computing, modern data centers face unprecedented energy demand. These power-hungry workloads place immense pressure on CPUs and GPUs, requiring innovative solutions that balance performance with energy efficiency. In this blog, we explore strategies for optimizing power distribution in data centers, highlighting intermediate bus converters (IBCs) as a critical component for handling increasing power loads while maintaining system efficiency.
The growing energy challenge
Emerging technologies like AI and cryptocurrency mining are accelerating data center energy consumption. According to the International Energy Agency (IEA), global data center electricity use in 2022 reached 460 terawatt-hours (TWh) and could more than double by 2026. To put this into perspective, these energy demands rival the total electricity consumption of Japan.
This rapid growth highlights the urgent need for efficient power delivery networks (PDNs). By optimizing power architectures, data centers can achieve environmental sustainability and reduce operational costs.
Scaling cabinet power
Current data centers typically supply 30–40 kW per cabinet, but future configurations may exceed 200 kW as CPUs and GPUs grow more powerful. For instance, NVIDIA’s H100 AI accelerator operates at a thermal design power (TDP) of 700 W, and next-gen models like the Blackwell B200 are expected to reach 1,200 W. Handling these power levels requires PDNs that can manage very high currents at low voltages, minimize voltage drops and conduction losses, and support effective cooling. Optimizing the PDN therefore means balancing metrics such as energy efficiency, power density, rack-space utilization, and cost-effectiveness.
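The current-handling challenge above follows directly from Ohm's law: for a fixed load power, halving the bus voltage doubles the current, and conduction loss scales with the square of current. The sketch below illustrates this with assumed figures (a 200 kW cabinet and a 1 mΩ busbar resistance, chosen purely for illustration); it is not based on any specific product or measured data.

```python
# Illustrative sketch of why higher intermediate bus voltages reduce
# distribution losses. The 200 kW load and 1 milliohm busbar
# resistance are assumed values for illustration only.

def distribution_loss(power_w: float, bus_voltage_v: float,
                      resistance_ohm: float) -> tuple[float, float]:
    """Return (bus current in A, conduction loss in W) for a given
    load power, bus voltage, and series busbar resistance."""
    current = power_w / bus_voltage_v      # I = P / V
    loss = current ** 2 * resistance_ohm   # P_loss = I^2 * R
    return current, loss

if __name__ == "__main__":
    POWER = 200_000.0   # 200 kW per cabinet (assumed future load)
    R_BUS = 0.001       # 1 milliohm busbar resistance (assumed)
    for v in (12.0, 48.0):
        i, loss = distribution_loss(POWER, v, R_BUS)
        print(f"{v:>4.0f} V bus: {i:,.0f} A, {loss / 1000:.1f} kW lost")
```

Because loss scales with I², quadrupling the bus voltage from 12 V to 48 V cuts conduction loss by a factor of 16, which is one reason 48 V intermediate buses and IBCs have become central to high-power rack designs.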