Asia-Pacific Hyper-scalers Expanding Beyond Core Locations for Proximity and Speed

by Krupal Raval, Managing Director, Equinix xScale Data Centres

Hyper-scalers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud have become a ubiquitous part of our lives. From searching the internet to online shopping, to storing data in the cloud, few of our online transactions escape the reach of hyper-scalers. And like other businesses, hyper-scalers have to continually pursue new business opportunities while tracking the evolving needs of their customers and partners.

As hockey great Wayne Gretzky once observed when asked about his proficiency at scoring goals, “I skate to where the puck is going to be, not where it has been.”

Similarly, hyper-scalers, which have traditionally focused on operating large-scale compute and storage facilities in core locations (either their own or leased colocation space), are increasingly responding to opportunities as their customers move to the edge, where data is generated and used. Edge computing allows customers to reduce latency, improve performance and support new use cases such as IoT, autonomous vehicles and augmented reality.

The Evolving Digital Infrastructure Landscape
The 2023 Global Interconnection Index (GXI) study from Equinix reinforces this trend, finding that hyper-scalers are expanding bandwidth faster in Asia-Pacific than in any other region worldwide, registering a Compound Annual Growth Rate (CAGR) of 39%. In the new economic reality, governments in digital-first economies such as Singapore are focusing on digital technologies as a growth driver, as outlined in the recent Budget 2023 measures. This will inevitably lead to a surge in data traffic, boosting the growth of hyper-scalers.

Like other leading service providers, hyper-scalers are building out edge infrastructure at a rate 150% faster than their digital core. It’s important to note, however, that cloud (core) and edge computing are not competing alternatives, but rather part of a continuum. The challenge for hyper-scalers is to place colocation, compute and storage services in locations where enterprises can use those services most efficiently and cost-effectively.

Historically, the data centre and network infrastructure sectors have taken a geographically centralized approach to the location of key facilities and assets in Tier 1 markets such as Singapore and Hong Kong. This approach enabled efficient management and control of resources, took advantage of economies of scale, and facilitated speedy transactions among network participants.

As the internet has matured and experienced accelerating adoption, along with increased demand for private connectivity via interconnection, this centralized approach has become less practical. Growing population density, community resistance to large data centres and local regulation have made it increasingly difficult for hyper-scalers to acquire the land or electrical power required to open new facilities in centralized locations. Instead, they deploy infrastructure in wholesale colocation facilities, avoiding the hassle of finding space and power and the expense of capital investment. The centralized approach also lacks the offsite redundancy required for business continuity and disaster recovery.

Hyper-scalers move to more distributed and decentralized architectures
Hyper-scalers are now taking a more distributed and decentralized approach to deploying their services. Where they previously might have built a data centre in a core (Tier 1) market, they are now more likely to build or colocate “overflow” operations in emerging markets where land and power are more available. By connecting these new data centres to core markets via submarine cables, they can create a data centre and network hub configuration. Market analysts believe that this shift in data centre builds and deployment could eventually lead to more localized or in-country architectures.

Hyper-scalers typically have the resources to build their own facilities in edge locations or emerging markets, but it often makes sense for them to go the colocation route instead. This approach gives them time to familiarize themselves with a local market and staff up their operations, while retaining a “wait and see” option should the new market opportunity not warrant the investment originally planned.

If hyper-scalers do decide to build out their own data centres, their presence in the market can actually benefit colocation providers, since hyper-scalers are notorious for underestimating their data centre needs. The gap between what they build and what they ultimately need often produces significant overflow demand for colocation providers.

Local cloud services gain popularity at the edge
It’s unlikely that hyper-scalers will self-build many low-power (1-5 MW) facilities at the edge, since these don’t scale easily. One trend gaining popularity among these cloud providers, however, is the creation of local or edge zones. Local zones place compute, storage, database and other cloud services close to large population and industry centres, often in on-premises environments. They allow customers to run applications that require low latency and local data processing in a cloud-like environment.

In countries with little or no in-country cloud infrastructure, local zones—think of them as edge data centres—enable hyper-scalers to deploy local-cloud infrastructure to serve local enterprises and compliance-sensitive verticals such as government agencies. In countries with more mature cloud infrastructure, local zones can be placed at remote edge locations that lack the critical mass of end users to justify a regional cloud deployment.

Hyper-scalers partner to serve edge customers
Another way for hyper-scalers to build out edge infrastructure while staying focused on their core competencies is to establish regional or global partnerships with enterprises serving customers already doing business at the edge. Such partnerships offer edge customers the low latency and high performance they demand plus the quality and reliability of cloud-based compute and storage infrastructure. Hyper-scalers can also offer cloud-based compute and machine learning resources as a fabric for edge customers to use as needed.

Some hyper-scalers, for example, are teaming up with local telecom companies to support real-time, low-latency workloads for local users of applications such as autonomous navigation or augmented reality. These partnerships involve installing dedicated hardware in the same racks and cages that telcos currently use to provide connectivity.

Another application where hyper-scalers can offer cloud capabilities to edge users is manufacturing. If a company uses image recognition to spot defects on a manufacturing line, for example, it will likely want local, real-time tools that spot defects in products before those products are packaged and shipped. If, however, the company plans to use product manufacturing data to predict costly failures of manufacturing equipment, it could develop machine learning models of that equipment in a cloud environment. It could then upload product manufacturing data and run analytics against those models, all without the overhead cost of managing its own computing infrastructure.
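To make that split concrete, the sketch below shows one way such a workload might be divided: a lightweight defect check runs locally at the edge for every unit coming off the line, while aggregated production data is batched up and sent to a cloud-hosted predictive-maintenance model. This is an illustrative Python sketch under our own assumptions, not any hyper-scaler’s product; the endpoint URL, threshold and field names are hypothetical placeholders.

# Illustrative sketch (not from the article): splitting the manufacturing
# workload between a local edge node and a cloud analytics service.
# The endpoint URL, threshold and field names are hypothetical placeholders.
import json
import time
import urllib.request

CLOUD_ANALYTICS_ENDPOINT = "https://example.com/predictive-maintenance"  # placeholder

def detect_defect_locally(frame_features):
    """Low-latency check run on the factory floor for every unit.

    A locally deployed vision model would normally produce the score;
    a simple average stands in for that inference step here.
    """
    defect_score_threshold = 0.8  # hypothetical tuning parameter
    score = sum(frame_features) / max(len(frame_features), 1)
    return score > defect_score_threshold

def upload_batch_for_cloud_analytics(production_records):
    """Send aggregated production data to the cloud-hosted model.

    Latency is not critical here, so records can be batched and sent
    periodically rather than in real time.
    """
    payload = json.dumps({"records": production_records,
                          "sent_at": time.time()}).encode()
    request = urllib.request.Request(CLOUD_ANALYTICS_ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request, timeout=10)

if __name__ == "__main__":
    # Real-time path: inspect each unit before it is packaged and shipped.
    if detect_defect_locally([0.7, 0.9, 0.85]):
        print("Defect detected - divert unit before packaging")
    # Batch path: ship equipment telemetry for failure prediction in the cloud.
    upload_batch_for_cloud_analytics([{"machine_id": "press-01", "vibration": 0.42}])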

Hyperscale market expansion is likely
The “big four” hyper-scalers, AWS, Microsoft Azure, Google Cloud and Meta, are by far the largest globally, representing about 78% of global capacity according to Structure Research. But a second tier of hyper-scalers and subscale providers, fast-growing companies that aspire to consume capacity at hyperscale levels, is emerging.

Oracle Cloud is clearly in the hyperscale category and focused on catching the leaders. Other would-be hyper-scalers include Zoom, NVIDIA and Apple, each with different degrees of interest in leasing versus self-building. In some ways, their trajectory to “hyper-scaledom” is affected by the leasing and building behaviours of the established players: current hyper-scalers can effectively squeeze out opportunities for new hyper-scalers by leasing all the available data centre capacity or buying up the land available for building new data centres. The cost and availability of capital to would-be hyper-scalers is also a significant factor in their future growth.

Expect a dramatic rise in hyperscale activity
The last few years of pandemic-driven growth in digital infrastructure have added momentum to cloud computing trends that were already in place. According to Graphical Research, Asia-Pacific’s hyperscale data centre market is estimated to register a CAGR of 18% between 2021 and 2027, reaching approximately US$10 billion in 2027.
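As a rough illustration of what that projection implies (the arithmetic below is ours, not part of the Graphical Research estimate), compounding backwards from roughly US$10 billion in 2027 at 18% per year puts the implied 2021 starting point at around US$3.7 billion:

# Illustrative arithmetic only: back out the implied 2021 market size from the
# figures quoted above (18% CAGR, ~US$10 billion in 2027).
cagr = 0.18
value_2027_usd_bn = 10.0
periods = 2027 - 2021  # six years of compounding

implied_2021_usd_bn = value_2027_usd_bn / (1 + cagr) ** periods
print(f"Implied 2021 market size: ~US${implied_2021_usd_bn:.1f} billion")  # ~3.7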

Over the next decade, we expect to see the continued expansion of core markets while hyper-scalers, regional service providers and other enterprises continue to decentralize their digital infrastructure to reach the edge or other strategic, difficult-to-access locations. The move to the edge is all about getting infrastructure closer to end users in the most convenient, most cost-effective way, whether those users are in emerging markets or in remote locations of well-developed countries.

Hyper-scalers will no doubt drive this decentralization, either as part of their core business or in partnership with others. One way or another, however, data centre colocation providers are likely to be the foundation on which this future is built. 
