We are in the midst of the biggest shift in the history of computing, and it is completely transforming the data center industry. That became clear at NVIDIA’s recent GTC 2026 event, the premier showcase for major innovations in advanced computing. Salute is proud to be the first Data Center Operations and Commissioning partner in the NVIDIA Partner Network (NPN), which makes GTC an important venue for meeting with our partners and customers. The events are typically a year apart, but the 2025 conference was only five months ago. Despite that short turnaround, there has been a stunning amount of change.
Here are key takeaways from the event:
The growth of AI and the shift to agentic workloads require more stringent operational models and faster ready-for-service (RFS) timelines for data centers
AI is no longer an emerging workload; it is driving an unprecedented surge in data center demand. The scale of infrastructure required to support next-generation AI represents one of the largest technology expansions in history, with operators under increasing pressure to deliver capacity at speed and at scale.
In his keynote, NVIDIA CEO Jensen Huang underscored this shift, announcing $1 trillion in demand for Grace Blackwell and Vera Rubin GPUs. This level of investment signals a step-change in infrastructure requirements, not incremental growth, as AI applications move rapidly from development into production environments.
But the demand story does not stop at scale; it is being accelerated by the evolution of AI workloads themselves. The industry is moving from traditional large language model training to more advanced reasoning models, and now to agentic AI systems that operate continuously and autonomously. These agentic workloads, enabled by platforms such as NVIDIA’s NemoClaw and OpenClaw, draw compute continuously and require significantly higher levels of processing, resiliency, and operational precision.
Salute’s Chief Product, AI, and Learning Officer, John Shultz, commented:
“Jensen’s update about agentic AI is big news. Unlike past waves of AI deployments, which focused heavily on model building and training, there is a major shift toward agentic workloads made possible by NVIDIA’s NemoClaw and OpenClaw platforms. This directly impacts data centers because agentic workloads are very different. Agentic workloads draw compute continuously and involve very high processing, which requires operational environments that are rigorous and exacting. You need the right operational strategy to deliver performance, mitigate risk and prevent costly downtime. Because of our close partnership with NVIDIA, Salute is ahead of the curve on this need and offers the only AI operations-as-a-service that meets the rigorous needs of agentic AI workloads and can keep pace with GPU deployments of this scale.”
At the same time, this rapid acceleration in demand introduces a parallel constraint: the ability to build, staff, and operate these environments at the pace required. Enabling this next phase of AI infrastructure requires more than capacity; it requires operational models, trained personnel, and delivery capabilities that can scale alongside the technology.
Salute supports organizations enabling this shift with operational models, trained personnel, and commissioning expertise designed to reduce time-to-ready and ensure performance at scale.
Inference AI and the next wave of AI factory expansion require large-scale development sites and operational models that can scale into new territories
Another key takeaway from Huang’s keynote address was the impact that Inference AI will have. To date, AI-focused data centers have been centralized, typically in major markets, and focused on model training. But the next phase of AI will be decentralized. Inference AI will deliver AI-powered services and insights to users in real time through a large number of localized facilities that minimize latency by placing GPU-driven processing as close as possible to users. At the same time, large AI factories focused on model training can be moved to remote locations with lower energy costs.
Salute’s AI/HPC Lead for EMEA, James Feeney, commented:
“To support the growth of Inference AI data centers and the shift of LLM workloads to remote facilities, data center operators will need a rigorous DTC liquid cooling strategy that can scale in size and geographic reach. Each of these Inference AI data centers will support a massive volume of requests and require a rigorous operational model that can be consistently delivered across a growing number of facilities spread across a geographic area. Data center operators will also have to ensure operational excellence at remote AI factories where staffing and support are harder to come by. These will be major tests of companies’ operational models. That is why so many companies are picking Salute as their operational partner. We have already been selected by customers who, as they scale their data centers, will be supporting several hundred billion dollars of the GPU orders that Jensen announced at GTC, and our ability to support Inference AI and remote AI factories is a big reason why.”
As AI infrastructure expands into both dense urban environments and remote, power-advantaged locations, the workforce challenge becomes more acute. These facilities require highly trained personnel who can operate advanced liquid cooling systems and manage AI-driven workloads, often in regions where experienced data center talent is limited.
Salute enables consistent global operations through scalable workforce deployment, specialized training, and on-the-ground support across both primary markets and emerging AI infrastructure locations.
Huang’s keynote also put a spotlight on how quickly data center design is evolving. He announced a new reference design for AI factories as well as a platform for building digital twins that can drive enhancements to designing and operating liquid-cooled data centers. Salute’s CTO Matt Renner observed:
“This is a bold vision of continuously evolving data center design, which means that operational models need to be equally dynamic. This is exactly what Salute’s DTC Liquid Cooling Operations Service is designed to do. Through our work with NVIDIA and our ecosystem of partners, we can work within these stringent criteria and deliver an operational model that adapts as technologies and best practices evolve. To future-proof your AI factory, you need an operational model that keeps pace with the accelerating changes in the AI industry. Salute is the only company that can deliver that.”
Equally important is the human dimension of this transformation. As data center designs shift toward fully liquid-cooled, high-density AI environments, the skill sets required to operate them are evolving just as quickly. Traditional training pathways are not keeping pace with the demands of AI infrastructure, creating a widening gap between technology deployment and workforce readiness.
Salute delivers future-ready data center operations through advanced training programs, global recruiting capabilities, and operational frameworks built for next-generation AI infrastructure.
How Salute enables rapid AI factory deployment
Meeting this challenge requires more than incremental hiring. It demands a structured approach to workforce development, including specialized training programs, global recruiting strategies, and standardized operational methodologies that can be deployed consistently across regions. Salute is addressing this need by investing in scalable training and recruitment models designed specifically for AI infrastructure, ensuring that customers have access to qualified teams capable of supporting next-generation data center environments from day one.
For more information about how Salute supports the needs of AI computing, visit: https://salute.com/ai-hub/.