One definition of ‘Artificial Intelligence’ is software that mimics and generates human-like behaviors. Whether this is desirable is a subject of debate, and many would prefer guaranteed rational, deterministic responses from the machines and automated services that surround us. However, AI is here to stay and, supported by the cloud, it represents a business worth around $136B in 2022 and projected to rise to nearly $2T by 2030, according to Grand View Research. One worthy function of AI is the minimization of energy consumption, both in the home and in industry, but AI computing itself consumes huge amounts of power, both in the ‘learning’ phase and in routine use. As a result, the largest data centers require a feed of more than 100MW, representing a high cost in dollars and to the environment through the resulting carbon footprint. AI is the focus here, but of course other growing burdens include cryptocurrency mining, the IoT and social media/streaming.
A success story has been the relatively slow growth of data center energy consumption compared with ballooning data throughput, largely due to continued improvements in the energy efficiency of hardware and its power supplies. However, rack power density requirements are said to be 3x higher for AI than for traditional data center functions, and further efficiency gains get progressively more difficult to achieve. As a result, system designers are continually revisiting their power delivery architectures, looking for ways to improve efficiency in converting utility AC down to the sub-1V DC levels often required by GPUs, CPUs, FPGAs and ASICs.
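To see why designers keep revisiting the conversion chain, note that stage efficiencies multiply, so losses compound from the utility feed to the point of load. The short Python sketch below illustrates this with assumed, purely illustrative efficiencies for a hypothetical three-stage chain (AC-DC front end, intermediate bus converter, point-of-load regulator); none of the figures are drawn from any specific product or facility.

```python
# Minimal sketch: cascaded conversion efficiency at data center scale.
# All stage efficiencies are illustrative assumptions, not measured values.

FACILITY_POWER_W = 100e6  # utility feed for a large data center (~100MW)

# Hypothetical chain from utility AC down to sub-1V DC at the load
stage_efficiencies = {
    "AC-DC front end": 0.96,
    "intermediate bus converter": 0.97,
    "point-of-load regulator": 0.92,
}

# Efficiencies of series-connected stages multiply
end_to_end = 1.0
for eta in stage_efficiencies.values():
    end_to_end *= eta

loss_w = FACILITY_POWER_W * (1.0 - end_to_end)
print(f"End-to-end efficiency: {end_to_end:.1%}")        # ~85.7%
print(f"Power lost as heat:    {loss_w / 1e6:.1f} MW")   # ~14.3 MW

# A one-point gain at a single stage compounds across the whole chain
improved = end_to_end / stage_efficiencies["point-of-load regulator"] * 0.93
saved_w = FACILITY_POWER_W * (improved - end_to_end)
print(f"Saving from 92% -> 93% at the PoL stage: {saved_w / 1e6:.2f} MW")
```

Even under these rough assumptions, a single percentage point recovered at one stage is worth nearly a megawatt at 100MW scale, which helps explain why the AC-to-load conversion chain is continually re-examined.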