Enterprises don’t just need more cloud—they need a cloud that’s ready for AI and resilient by design. At VMware Explore 2025, Broadcom advanced that agenda by making VMware Cloud Foundation (VCF) 9.0 an AI-native private cloud platform and bundling VMware Private AI Services into the base subscription. For IT leaders grappling with fragmented stacks, this reframes VCF from a “virtualization platform” to an enterprise AI substrate—with governance, cost control, and performance at its core.
The message from the main stage was unapologetically clear: for a growing class of workloads, private cloud can beat public cloud on performance, control, and total cost—especially when data gravity, sovereignty, and GPU scheduling are key considerations. Whether you buy that claim or not, the direction of travel is unmistakable: AI belongs where your data, controls, and risk posture already live.
Why it matters for Hybrid & Multi-Cloud
Most enterprises will remain hybrid for the foreseeable future — blending on-premises systems, multiple public clouds, and edge environments. By making VCF AI-native, Broadcom is reducing the friction of running private AI while also improving workload portability across these environments. The new Model Runtime capabilities and secure model endpoint sharing allow platform teams to manage AI as a governed enterprise service, rather than a patchwork of disconnected pilots — ensuring multiple business units can leverage AI without data leakage or duplicated infrastructure. At the same time, ecosystem partnerships (such as the collaboration with Canonical on containers and AI) give enterprises more flexibility, making it easier to decide where workloads should run — whether in private data centers, across public clouds, or at the edge.
What good looks like (use cases)
- Regulated analytics & RAG at the edge of data: Enterprises can keep sensitive data (PHI/PII, financial records, etc.) in place while serving business units through shared model endpoints. This enables multiple teams to utilize the same AI capabilities, with strict data isolation, auditability, and compliance built in.
- GPU efficiency for private AI: Instead of scattering models across siloed environments, organizations can consolidate training and inference workloads onto governed VCF clusters. With chargeback and FinOps policies already in place, this ensures GPUs are used efficiently, costs are transparent, and utilization aligns with business demand.
- Multi-cloud portability without lock-in: By standardizing runtime, security, and governance under VCF, enterprises can move AI workloads across environments with less friction. This creates real flexibility: you can burst to the public cloud when needed, run steady workloads in private data centers, and extend to the edge — without creating new silos or dependencies.
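The GPU-efficiency pattern above—consolidating workloads onto governed clusters with chargeback and utilization tracking—can be sketched as a simple cost-allocation calculation. This is a minimal illustration, not a VCF or vRealize API; the team names, GPU-hour rate, and usage figures are all hypothetical placeholders.

```python
# Minimal chargeback sketch: allocate shared GPU-cluster costs to teams
# by metered GPU-hours. All names, rates, and figures are hypothetical.

GPU_HOUR_RATE = 2.50  # assumed blended cost per GPU-hour (hardware + power + ops)

# Metered GPU-hours per business unit on a shared, governed cluster
usage = {
    "fraud-analytics": 1_200,
    "customer-rag": 800,
    "doc-summarization": 400,
}

def chargeback(usage: dict, rate: float) -> dict:
    """Return per-team charges computed from metered GPU-hours."""
    return {team: round(hours * rate, 2) for team, hours in usage.items()}

def utilization(used_hours: float, capacity_hours: float) -> float:
    """Cluster utilization as a fraction of available GPU-hours."""
    return used_hours / capacity_hours

charges = chargeback(usage, GPU_HOUR_RATE)
total_used = sum(usage.values())
# e.g. 8 GPUs available 24 h/day over a 30-day month
util = utilization(total_used, capacity_hours=8 * 24 * 30)

print(charges)        # per-team monthly charges
print(f"{util:.1%}")  # cluster utilization
```

Even a toy model like this makes the FinOps conversation concrete: once GPU-hours are metered per tenant, chargeback and utilization targets fall out directly.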
Executive call to action
- Make AI a platform, not a pilot. Too many enterprises are still experimenting with isolated AI proofs of concept. With VCF 9.0 and Private AI Services bundled in, executives should formalize AI as a governed enterprise platform. That means treating AI like any other strategic capability — with service-level objectives (SLOs), tenancy boundaries, lifecycle management, and chargeback models that ensure accountability and sustainable scaling.
- Land enterprise AI where data resides. The most sensitive, high-value data often lives on-prem or in regulated environments. Instead of forcing data into public clouds (with their associated costs, sovereignty, and latency issues), start by running private AI close to the data on VCF. Public cloud can still play a role in providing elasticity and ecosystem integration, but the default should be: keep training and inference near the data, unless there's a compelling business case otherwise.
- Design for resilience and compliance by default. AI adoption will fail if it can’t withstand disruptions or meet regulatory scrutiny. Capabilities such as Model Runtime and secure model endpoint sharing enable resilient, multi-tenant deployments where failures are isolated and data boundaries are enforced. Executives should embed compliance, auditability, and failover planning into AI operations from day one, not as an afterthought.
- Harmonize multi-cloud operations. The hybrid and multi-cloud reality won’t disappear. Standardize IAM, networking, observability, and policy controls across VCF and your hyperscalers. This creates portability with discipline, allowing workloads to move without creating new silos. Enterprises should use this moment to extend cloud operating models (FinOps, SecOps, DevOps) consistently across both private AI and public cloud AI services.
- Measure economics transparently. AI is expensive — GPUs, egress, and operations all add up. Executives must insist on a transparent TCO framework comparing private AI on VCF with managed AI services in the cloud. This means tracking GPU utilization, energy consumption, support overhead, and opportunity costs. With these insights, CIOs and CFOs can jointly decide when to repatriate workloads, when to burst into the public cloud, and when to consolidate.
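The transparent TCO framework described above can be sketched as a side-by-side monthly comparison of private AI on owned infrastructure versus a managed cloud AI service. All cost figures below are hypothetical placeholders for illustration, not Broadcom or hyperscaler pricing.

```python
# Sketch of a monthly TCO comparison: private AI on owned infrastructure
# vs. a managed cloud AI service. All figures are hypothetical.

from dataclasses import dataclass

@dataclass
class PrivateAICosts:
    gpu_capex_monthly: float  # amortized hardware cost per month
    energy: float             # power and cooling per month
    ops_support: float        # platform and support staffing per month

    def monthly_total(self) -> float:
        return self.gpu_capex_monthly + self.energy + self.ops_support

@dataclass
class ManagedAICosts:
    gpu_hours: float          # consumed GPU-hours per month
    gpu_hour_rate: float      # on-demand rate per GPU-hour
    egress_gb: float          # data egress per month, in GB
    egress_rate: float        # cost per GB of egress

    def monthly_total(self) -> float:
        return self.gpu_hours * self.gpu_hour_rate + self.egress_gb * self.egress_rate

private = PrivateAICosts(gpu_capex_monthly=20_000, energy=3_000, ops_support=7_000)
managed = ManagedAICosts(gpu_hours=10_000, gpu_hour_rate=3.00,
                         egress_gb=50_000, egress_rate=0.09)

print(private.monthly_total())  # 30000.0
print(managed.monthly_total())  # 34500.0
```

Tracked over time with real metered inputs, the same structure supports the repatriate/burst/consolidate decisions the bullet describes: the break-even point shifts as GPU utilization, egress volume, and rates change.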
Bottom line: Broadcom’s 2025 moves reposition VCF as an AI-ready private cloud with a multi-cloud stance. If you’re designing for speed with control, this is the moment to align enterprise architecture, data governance, and platform engineering so AI can scale where it creates the most durable advantage.