Why Infrastructure, Not Algorithms, Will Determine GenAI Winners
The Difference Between Scaling and Maintaining the Status Quo

In 2024, everyone explored generative and agentic artificial intelligence (AI) use cases. In 2025, only those with the right infrastructure will see them scale.
That shift—from experimentation to execution—is now playing out across Asia Pacific and Japan (APJ). In conversations with channel partners, it is evident the discussion has moved beyond proofs of concept (POCs) to how organisations are navigating production constraints, deployment roadmaps, and risk trade-offs.
At the same time, customers across the region are asking sharper questions: Can we scale these AI projects securely with the infrastructure and protocols we have? How does this play out in hybrid multicloud environments? How can we maintain full control over our data while avoiding vendor lock-in? How do we balance cost, control, and complexity?
Increasingly, the answer lies in containerisation—a technology and approach that is essential for enterprise AI success. Containerisation allows organisations to package and isolate applications together with their entire runtime environment, meaning they can innovate within each container and move it between environments such as development, testing, and production while retaining full functionality.
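To make the "package once, run anywhere" idea concrete, here is a minimal sketch of how such a container might be defined. The service, file names, and port below are hypothetical illustrations, not details from any specific deployment:

```dockerfile
# Minimal sketch: package a hypothetical GenAI inference service
# together with its full runtime environment.
FROM python:3.11-slim

WORKDIR /app

# Pin dependencies so the container behaves identically in
# development, testing, and production.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image.
COPY app.py .

# The resulting image runs unchanged on premises, at remote
# locations, or in any cloud.
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

Once built (for example with `docker build -t genai-service .`), the same artefact can be promoted through every environment without modification, which is the portability the article describes.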
Turning Containerisation into Strategic Capability
Containerisation enables enterprises to deploy and scale AI applications seamlessly across different environments. It works on premises, at remote locations, or in the cloud while keeping data within required borders. From our partner discussions, one thing is clear: containerisation is no longer optional. If an organisation wants to scale its AI deployments, it needs to incorporate containers.
But making the shift is not simple. Many customers are still grappling with steep learning curves and operational complexity, especially when running generative or agentic AI workloads alongside legacy enterprise software. Still, the payoff is real. Containerisation’s modular design directly addresses deployment challenges that traditional infrastructure was not built to handle.
This shift is already playing out on the ground. At our APJ partner roundtable in March, Japan’s CTC Itochu shared how enterprise priorities are evolving in real time. As Toshihiro Kan, General Manager of the Telecommunication Engineering Development Department, put it: last year was about building platforms. This year is about deciding what services to run on them.
Turning platforms into production-ready services introduces a new layer of complexity. That is why partners are evolving from implementers to strategic advisers. As Ashutosh Deuskar, Owner and Director of India’s VDA Infosolutions, observed, many customers are now running GenAI applications alongside existing enterprise software. This dual-world infrastructure demands partner expertise to manage the complexity while ensuring stability at scale.
Market Reality Check
One thing that strikes me most in these conversations is how clearly enterprises across APJ recognise the stakes. Infrastructure and deployment choices made today will determine how far and fast they can scale AI tomorrow.
The shift is driven by practical necessity. Every new application now follows cloud-native principles, making containerisation essential rather than optional. As our Australian partner Pedro Duarte, Sales Director at Think Solutions, noted, this transition requires genuine organisational commitment. Customers are investing significant time to modernise legacy applications because the operational flexibility justifies the complexity.
What is helping to accelerate deployment is that cost, once a major barrier, is now far less of an issue. As Roshan Shetty, Co-founder of India-based CitiusCloud Services, highlighted during our roundtable, GenAI token costs have dropped by roughly 70 percent, making deployment far more feasible. This cost reduction, combined with containerisation's flexibility, creates ideal conditions for broader AI adoption.
What This Means for Business Leaders
This infrastructure transformation is changing how organisations think about technology partnerships. Partners now lead educational workshops and guide complex architectural decisions rather than simply implementing predetermined solutions. The focus has shifted from recommending platforms to helping customers understand what workloads belong where: on premises or in the cloud, legacy systems versus modern applications.
The growing interest in multi-vendor strategies reflects this new reality. As markets adjust to major industry consolidations, customers seek control and flexibility, not just capability. Partners help enterprises avoid overdependence on single vendors while building adaptable foundations that evolve with both technological advances and regulatory changes.
Organisations with modern, flexible infrastructure consistently deploy AI applications faster and more reliably than those trying to retrofit legacy systems. The performance gap is widening, and the window for catch-up strategies is narrowing.
The Execution Winners of 2025
Across 2025, three characteristics will distinguish the organisations that successfully scale AI from those that remain stuck in pilot projects:
- Infrastructure readiness: having modern, flexible systems that can deploy AI across any environment while still meeting compliance requirements.
- Partnership sophistication: access to strategic advisers who understand both technology and evolving regulatory landscapes, not just implementers.
- Operational agility: the ability to iterate quickly with AI without compromising on security, compliance, or performance.
Those who move early on infrastructure readiness are already seeing the returns: in resilience, agility, and the ability to lead in an AI-driven world.
The New Competitive Reality
The infrastructure decisions being made right now feel more critical than ever. And across APJ, the AI winners are becoming clear. What distinguishes them is not having the most sophisticated AI models, but the infrastructure and partnerships that enable deployment at scale.
Put simply, infrastructure readiness is the new competitive advantage.