
Scaling at speed

[Image: Christopher Butler, President, Industrial Business at Flex]
by Christopher Butler
President, Embedded and Critical Power
Posted on February 6, 2026

Where does power architecture design start?

At the chip. When you look at the number of chips being deployed and how that impacts power consumption at the server, you can extrapolate what the power architecture needs to look like. For instance, if just 20 percent of the chips in the data center are switched from CPUs to GPUs, you’re going to need three times the power. And when you’re condensing 1 MW of power into a single rack, the power architecture has to change to deliver energy in a form factor that customers can use for their AI and HPC workloads.
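For readers who want the arithmetic, here is a minimal sketch of that extrapolation. The per-device power figures are hypothetical placeholders, not Flex data, and the second part simply inverts the quoted "three times the power" claim to show the GPU-to-CPU power ratio it implies.

```python
# Back-of-the-envelope sketch of the extrapolation described above.
# All per-device power figures are illustrative assumptions, not Flex data.

def fleet_power_kw(n_chips: int, gpu_fraction: float,
                   cpu_watts: float, gpu_watts: float) -> float:
    """Total chip-level power draw in kW for a mixed CPU/GPU fleet."""
    n_gpu = round(n_chips * gpu_fraction)
    n_cpu = n_chips - n_gpu
    return (n_cpu * cpu_watts + n_gpu * gpu_watts) / 1000.0

# Forward extrapolation with hypothetical per-device figures.
base = fleet_power_kw(10_000, 0.0, cpu_watts=300.0, gpu_watts=1_000.0)
mixed = fleet_power_kw(10_000, 0.2, cpu_watts=300.0, gpu_watts=1_000.0)
print(f"20% GPU swap -> {mixed / base:.2f}x total chip power")

# Inverting the quoted claim: if a 20% swap triples total power, then
# 0.8 + 0.2 * r = 3, so the implied per-GPU draw (including its share of
# server overhead) is r = (3 - 0.8) / 0.2 = 11x the per-CPU draw.
f, multiplier = 0.2, 3.0
print(f"implied GPU:CPU power ratio: {(multiplier - (1 - f)) / f:.0f}x")
```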

Is liquid cooling the de facto standard now?

Yes, for data centers processing AI workloads. Direct-to-chip liquid cooling is the most effective way to remove heat from racks as dense as the ones we’re seeing. We acquired JetCool specifically for that reason. The microconvective cooling technology behind their cold plates, modular CDUs, and liquid-to-chip products has been a game-changer for AI and HPC applications. And our partnership with LG will enable us to deliver prefabricated, scalable data center infrastructure solutions that incorporate advanced liquid and air cooling technologies to address thermal management challenges from end to end.
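To give a sense of the thermal scale involved, here is a minimal sketch of the governing relation Q = ṁ·c_p·ΔT, assuming (hypothetically) that a direct-to-chip loop carries away essentially all of a 1 MW rack’s heat with water-based coolant.

```python
# Rough thermal-scale sketch, assuming a direct-to-chip loop absorbs nearly
# all of a rack's heat. From Q = m_dot * c_p * dT, the required coolant mass
# flow follows directly. Properties approximate water near 30 degC.

CP_J_PER_KG_K = 4180.0     # specific heat of water
DENSITY_KG_M3 = 996.0      # density of water

def coolant_flow_lpm(heat_kw: float, delta_t_c: float) -> float:
    """Volumetric flow in liters/minute needed to absorb `heat_kw`
    at a coolant temperature rise of `delta_t_c` degrees C."""
    mass_flow_kg_s = heat_kw * 1000.0 / (CP_J_PER_KG_K * delta_t_c)
    return mass_flow_kg_s / DENSITY_KG_M3 * 1000.0 * 60.0

# Hypothetical 1 MW rack with a 10 degC coolant rise:
print(f"{coolant_flow_lpm(1000.0, 10.0):,.0f} L/min of coolant")
```

At those assumed numbers, a single 1 MW rack needs on the order of 1,400 liters of coolant per minute, which is why CDU capacity and flow distribution become first-order design concerns at this density.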

[Image: Server racks filled with densely packed hardware inside a data center.]

Can anyone really scale power and cooling solutions at the speed data center operators are looking for?

We can — and I don’t say that lightly, because our competitors cannot. Flex has been manufacturing complex, technologically sophisticated products at scale for customers around the world for 50 years. It’s our superpower. When we apply that global reach and regional expertise to our portfolio of power, cooling, and IT infrastructure solutions, we deliver something no other company can. Scalability with speed and accuracy is right in our wheelhouse, and we continue to add manufacturing square footage in key locations to support these data center buildouts. Not only that, we manufacture our own portfolio of modular CDUs, PDUs, 1 MW power racks, and power pods and skids that help customers scale even faster. 

So “grid to chip” isn’t just marketing hyperbole? 

Absolutely not. Our portfolio spans the entire data center power chain, from the utility to the rack to the chip. We design, manufacture, integrate, and deploy it all at scale. And we co-innovate new solutions with our customers as power architectures change to support AI-era compute requirements, including the cooling systems that accompany them. For instance, we’re working with them to reduce transition losses as architectures move toward +/- 400 VDC and 800 VDC systems for greater efficiency and cost savings, including designing solutions that bring medium-voltage power directly into the rack and eliminate some power transitions within the rack itself. Designing for efficiency from the get-go presents a lot of opportunities for us and the data center industry as a whole.
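For intuition on why eliminating transitions pays off, a minimal sketch: end-to-end efficiency is the product of per-stage efficiencies, so each conversion stage removed recovers its loss. The stage counts and efficiency figures below are illustrative assumptions, not Flex specifications.

```python
# Minimal sketch: end-to-end efficiency of a cascaded power chain is the
# product of per-stage efficiencies, so every conversion stage removed
# recovers its loss. Stage counts and efficiencies are illustrative
# assumptions, not a description of any specific architecture.
from math import prod

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """End-to-end efficiency of a cascaded power-conversion chain."""
    return prod(stage_efficiencies)

# Hypothetical legacy chain: MV utility -> LV AC -> rack AC/DC -> board DC/DC
legacy = chain_efficiency([0.985, 0.96, 0.95, 0.92])
# Hypothetical +/-400 VDC chain with one intermediate conversion removed
hvdc = chain_efficiency([0.985, 0.97, 0.92])

for name, eff in (("legacy", legacy), ("+/-400 VDC", hvdc)):
    lost_kw = (1.0 / eff - 1.0) * 1000.0  # upstream loss per 1 MW delivered
    print(f"{name}: {eff:.1%} end-to-end; ~{lost_kw:.0f} kW lost per MW delivered")
```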

[Image: Technician assembling a power module, connecting wiring and components inside a metal enclosure.]

Has bespoke given way to standardization?

We still build to specific customer needs and regional requirements, but as we’ve seen through efforts such as the Open Compute Project (OCP), standardization, modularization, and replication are the fastest way to scale. All of our customers are talking to me about building blocks like CDUs, PDUs, and power pods — preconfigured, pre-commissioned, pre-tested, and pre-validated systems that unlock the scale they need. 

What’s the unsung hero in all this?

Speed to deployment. Once the site is selected and power access is secured, the timeline for a hyperscale data center build averages about 18 months, and every data center operator would shorten that if they could. So, what happens after manufacturing is incredibly important, from right place/right time delivery through commissioning, inspection, and energization. The more we can do in the factory, the faster they can move on site. What if we could get it down to 30 to 60 days? Perhaps the “sizzle” isn’t just the technology. It’s the ability to reduce the time it takes for our data center customers to become operational. 

[Image: High-voltage electrical switchgear cabinets inside a data center power room.]

Once power access has been secured, are there other limiting factors that impede rapid scalability? 

The equipment required to upgrade the electrical grid and route power to and through the facility is in high demand. Lead times have extended from months to a year or more for large power transformers, high-voltage switchgear, diesel and natural gas backup generators, and specialized cooling systems.

In the face of unprecedented demand, the companies that manufacture this equipment are dealing with material shortages, manufacturing capacity limitations, and other supply chain constraints, which is bringing the issue to a head.

It’s one reason you see Flex continually expanding our manufacturing capacity by repurposing space, leasing facilities, acquiring complementary companies, and forging strategic partnerships. Building agile, resilient supply chains is a big part of that equation, too. 

This is such a rapidly evolving industry. What else are you thinking about in terms of the future state of data center power? 

Grid constraints aren’t going away anytime soon, and data center operators are exploring onsite power generation beyond backup generators to ensure performance, reliability, and scalability. How will onsite nuclear modules, solar/wind/geothermal microgrids, natural gas turbines, and other energy sources affect the data center power infrastructure and its interaction with the grid? Do they eliminate or exacerbate equipment shortages?

At Flex, we’re primarily focused on the management and transmission of power rather than how it’s generated, but we have experience in utility power that goes well beyond a standard data center deployment and are excited to be able to harness that broad knowledge base in these industry discussions. 

If there’s one thing missing from this capacity build-out equation, what is it? 

Workforce preparation. We’ve all heard about the shortage of electricians, plumbers, and other tradespeople needed to build capacity, but few are talking about the people needed to inspect and commission projects or troubleshoot power quality and efficiency issues, among other skills. And there aren’t a lot of people who understand the next-gen +/- 400 VDC and 800 VDC power architectures, either. Preparing for the future isn’t just about technology. It’s about talent. There are rewarding careers in the power field. We need to create a path for people. I think it’s a really exciting time in a great industry.

Take a deeper dive into our approach to data center power, heat, and scale challenges in this Data Center Dynamics podcast and this article in Data Center Frontier.