The relentless advance of AI, machine learning (ML), cryptocurrencies, and cloud computing is dramatically reshaping data centers. To support a wide array of services, from high-resolution video streaming to computationally intensive AI-driven data processing, data centers are rapidly becoming one of the largest consumers of global energy. The International Energy Agency (IEA) projects that, if current trends continue, data centers could consume more than 1,000 terawatt-hours (TWh) of electricity annually by 2026, a stark increase from the 460 TWh recorded in 2022.
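To put those IEA figures in perspective, a quick back-of-the-envelope calculation shows the implied growth rate. This is an illustrative sketch only; the 2026 figure is a projection, not a measurement:

```python
# Growth rate implied by the IEA figures cited above:
# 460 TWh in 2022 rising to ~1,000 TWh by 2026.

def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

cagr = implied_cagr(460, 1000, 2026 - 2022)
print(f"Implied growth rate: {cagr:.1%} per year")  # roughly 21% per year
```

Doubling in four years corresponds to a compound annual growth rate of roughly 21%, which underlines why power distribution has become a first-order design concern.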
As data centers continue to grow and adapt to keep up with processing demands, they encounter significant challenges. In an era of unprecedented data throughput, power distribution strategies are evolving to ensure energy efficiency, reduced heat generation, optimized space utilization, and effective cost management.
Traditionally, data centers were designed to handle a power supply of 30-40kW per cabinet. The advent of high-power CPUs and GPUs demands a re-evaluation of this standard: Nvidia's H100 AI accelerator, for example, contains 80 billion transistors and has a thermal design power (TDP) of 700W. Components with such substantial TDPs push the boundaries of what conventional cooling and power systems can manage. As processing power continues to scale, the power requirements for a single cabinet are expected to exceed 200kW.
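A rough sketch shows how quickly accelerator TDPs consume a cabinet's power budget. Only the H100's 700W TDP comes from the text above; the GPUs-per-server, servers-per-rack, and overhead figures are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope cabinet power estimate.
# Only H100_TDP_W is taken from the text; the rest are assumptions.

H100_TDP_W = 700        # per-GPU thermal design power (from the text)
GPUS_PER_SERVER = 8     # typical 8-GPU node (assumption)
SERVERS_PER_RACK = 4    # illustrative rack density (assumption)
OVERHEAD_FACTOR = 1.5   # CPUs, memory, fans, conversion losses (assumption)

gpu_power_kw = H100_TDP_W * GPUS_PER_SERVER * SERVERS_PER_RACK / 1000
rack_power_kw = gpu_power_kw * OVERHEAD_FACTOR
print(f"GPU draw: {gpu_power_kw:.1f} kW, estimated rack total: {rack_power_kw:.1f} kW")
```

Even under these modest assumptions, GPU draw alone reaches 22.4kW and the estimated rack total approaches the traditional 30-40kW ceiling, so denser deployments quickly require the power and cooling rethink described above.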