3 Threats to Your AI Alpha: Latency, Liquidity and the Legacy Box


For all the talk about the benefits of AI agents and ever larger, more powerful language models, few want to discuss the essential plumbing. But in Hong Kong’s financial district, where milliseconds count and overlapping jurisdictions multiply the regulations, companies are beginning to realize that it is their underlying infrastructure that will ultimately carry their AI ambitions to success – or to spectacular failure.

So when Digital Realty and BT International recently gathered financial services executives in Hong Kong for an executive lunch titled “Winning with AI: When your data infrastructure becomes the new battleground”, held on the sidelines of the 8th Hong Kong Chief Digital and Data Summit, they surfaced an uncomfortable truth: most organizations are trying to run AI marathons in shoes designed for walking.

Asia Pacific’s Triple Threat to AI Adoption

An Ecosystm survey conducted in 2025 across the Asia Pacific region highlighted what is keeping FSI technology leaders up at night when it comes to AI: insufficient IT infrastructure in key locations, inadequate data systems and tools, and a critical shortage of qualified technical talent were the top three concerns.

Perhaps most telling was this statistic from the Ecosystm survey of Hong Kong financial services organizations: while every financial services organization surveyed is experimenting with or has already adopted AI, only 22% have a data strategy that covers all aspects, including data localization.

Roddie Samuel, vice president of sales for Asia Pacific at Digital Realty, who shared these research findings with the lunch audience, observed: “Less than a quarter have the ability to access data, but they’re all trying AI. So clearly data is a big problem.”

This data problem is also architectural. Many organizations have built their existing infrastructure in a reactive manner, designed to perform specific functions. AI requires something different: an integrated system capable of making real-time decisions based on data from multiple sources.

“Being able to access the data, being able to configure your infrastructure to be able to access the data,” Samuel explained, is now the main challenge. “You need to build that basic infrastructure before you can actually embark on the AI journey.”

Decoding the physical constraints of high-density AI

AI workloads go beyond simple storage and server infrastructure. How they are hosted also matters. Modern workloads running on advanced hardware require cooling systems that traditional data centers were not designed for. Many existing facilities literally cannot support the hardware needed for advanced AI, leading to a data center boom.

“Liquid cooling for next-generation AI hardware,” Samuel noted, citing a companion Forrester report on the top seven drivers of technology investments in AI infrastructure and data centers, “fundamentally changes the dynamics of how data centers are built and operated.” The implication is that specialist data center operators now hold a competitive advantage that companies cannot easily replicate in-house.

The infrastructure shift also extends beyond power and cooling. The basic building block itself has changed: the Forrester report proposes the rack as the new unit of scalability, which is one reason vendors now sell pre-integrated racks as single units. These deployments may be larger and more expensive, but they are easier to integrate and scale.

The Network Bottleneck: Latency, Compliance, and Rigidity

While data centers continue to dominate headlines, networks are the invisible backbone that determines the success or failure of AI. David Dykes, regional director for Northeast Asia and Japan at BT International, identified three critical gaps in current financial services infrastructure:

  • Latency and Data Gravity: Moving huge volumes of data creates drag on the network.
  • Compliance fragmentation: Data is distributed across sovereign geographies. Without visibility into network flows, organizations cannot properly manage compliance risks. “If you can’t see it, you can’t handle it,” Dykes observed.
  • Operational Rigidity: Existing networks require weeks or even months to provision changes. AI requires immediacy—the ability to adapt resources as workloads fluctuate.

Next-generation networks promise what legacy networks can’t: dynamic scaling, built-in security, and, most importantly, visibility into data flows.

“The new networks are configured to give you connectivity to data centers, hyperscalers and SASE providers,” Dykes explained. “Everything is programmed into the network to facilitate this access.” Security is built into the network fabric rather than bolted on afterward.

Geopolitical Friction: Managing Data Localization Requirements

Hong Kong presents a microcosm of the challenges facing financial services globally. Data residency requirements vary significantly between jurisdictions. In the broader Asia Pacific survey, Ecosystm found that data privacy regulations limit AI adoption for 67% of respondents in India and Singapore; among Hong Kong financial services organizations specifically, the figure is 54%.

An executive at the roundtable expressed the dilemma: How do you design systems when governments require data localization, but AI training requires centralized computing resources, preferably in a single location with the engineering talent to support it?

The honest answer, Samuel acknowledged, is that you cannot fully resolve this dilemma. “It was created for a number of reasons by governments,” he said. “Sometimes it’s for national security, but other times it’s just to try to retain more jobs.” The strategy is about minimizing risk rather than eliminating it: classify data carefully to identify what can move, and pair that with a regional architecture that works within local regulations.
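In practice, that classification step can be as simple as tagging each dataset with a sensitivity level and a home jurisdiction, then routing only exportable data toward a centralized training location. The Python sketch below illustrates the idea; the classifications, regions and export rules are hypothetical placeholders, not the partners’ actual framework.

from dataclasses import dataclass

# Hypothetical residency rule: only these classifications may leave their
# home jurisdiction. Real rules depend on each regulator and on legal review.
EXPORTABLE = {"public", "anonymized"}

@dataclass
class Dataset:
    name: str
    classification: str  # e.g. "public", "anonymized", "personal", "regulated"
    home_region: str     # where the data is collected and legally resident

def training_location(ds: Dataset, central_region: str) -> str:
    """Return the region where this dataset can be used for AI training."""
    if ds.classification in EXPORTABLE:
        return central_region   # safe to move to the central GPU cluster
    return ds.home_region       # must stay local; train or fine-tune in-region

datasets = [
    Dataset("market_ticks", "public", "HK"),
    Dataset("client_kyc", "regulated", "SG"),
    Dataset("support_chats_anon", "anonymized", "IN"),
]

for ds in datasets:
    print(ds.name, "->", training_location(ds, central_region="HK"))

A real program would replace the single lookup with per-jurisdiction rules and legal sign-off, but the structure stays the same: classification first, architecture follows.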

The strategic solution: global scale through partnership

Overcoming these challenges cannot be done alone, even by a supplier. That is why the partnership between Digital Realty and BT International matters, and why it reflects a broader industry trend.

The logic is simple: Digital Realty is the world’s largest data center provider, operating a global platform of 300+ data centers across 6 continents, 25+ countries and 50+ metros on PlatformDIGITAL®, with high-density colocation available in 28 markets across 3 regions. BT is the world’s most trusted connector of people, machines and devices, serving multinational customers in 180 countries and building the first truly global, AI-ready telecommunications platform. Together, they can deliver what Samuel calls “connected cloud colocation”: the ability to connect seamlessly across data centers, clouds and networks through a unified architecture.

Digital Realty was also named a Leader in the 2025 IDC MarketScape assessment of global data center colocation service providers.

“Customers now have to connect to many locations,” Samuel explained. “This could be other data centers, a competing data center, or even different clouds.” Digital Realty provides some of that cloud connectivity, “but we don’t have the breadth of a network, and that’s where BT comes in.”

The commercial benefits are substantial. Dykes cited a Hong Kong client that saved around £1 million a year with this architecture, not just through operational efficiencies but also through capital savings from a smaller data center footprint and leaner connectivity.

The hybrid path to modernization for new AI workloads

For organizations running on existing infrastructure, especially those that have invested heavily in their own data centers, the question becomes: what should be modernized in-house and what should be outsourced?

The answer is hybrid, not binary. “This is not about rejecting what you have done in the past,” Samuel emphasized. Significant modernization must happen within the existing estate: rearchitecting applications, organizing data flows and implementing appropriate classification.

But for new AI workloads, especially those requiring high power density? “It’s probably not going to be possible in your traditional data center. It’s just not capable of supporting it.” The model becomes incremental: keep existing applications on existing infrastructure while placing new AI compute in commercial facilities equipped to meet modern requirements.

The approach to networking is similar. The full network fabric required to support distributed AI workloads remains two to three years away, and legacy MPLS networks are expected to be phased out by 2029. In the meantime, organizations face a gradual transition.

“You don’t need to take a big-bang approach,” Dykes advised. Start by gaining visibility into current network flows. Add network-as-a-service capabilities selectively where flexibility is most important. Gradually migrate workloads toward optimized paths that balance performance, durability, and cost.
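As a rough illustration of that last step, the migration can start with nothing more sophisticated than scoring candidate paths against the criteria Dykes lists. The Python sketch below does exactly that; the path names, metrics and weights are hypothetical placeholders, not a methodology from BT or Digital Realty.

# Hypothetical path-scoring sketch: rank candidate network paths for a
# workload by weighted performance, durability and cost. All figures are
# illustrative only.

CANDIDATE_PATHS = [
    # (name,              latency_ms, availability, monthly_cost_usd)
    ("legacy_mpls",             18.0,        0.999,             9000),
    ("naas_direct_cloud",        6.0,       0.9995,             7000),
    ("internet_vpn",            25.0,        0.995,             2000),
]

WEIGHTS = {"performance": 0.5, "durability": 0.3, "cost": 0.2}

def score(latency_ms: float, availability: float, cost: float) -> float:
    """Higher is better; each criterion is normalized to a rough 0-1 scale."""
    performance = 1.0 / (1.0 + latency_ms / 10.0)    # lower latency scores higher
    durability = (availability - 0.99) / 0.01        # 0.99 maps to 0, 1.0 maps to 1
    affordability = 1.0 / (1.0 + cost / 5000.0)      # lower cost scores higher
    return (WEIGHTS["performance"] * performance
            + WEIGHTS["durability"] * durability
            + WEIGHTS["cost"] * affordability)

ranked = sorted(CANDIDATE_PATHS, key=lambda p: score(*p[1:]), reverse=True)
for name, latency, availability, cost in ranked:
    print(f"{name}: score={score(latency, availability, cost):.3f}")

The point is not the arithmetic but the discipline: make the trade-offs explicit per workload, then migrate first where the gap between legacy and next-generation networking is widest.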

Most importantly, the window for strategic positioning is open now, while markets are still debating the virtues of advanced AI. Organizations that wait risk running into infrastructure constraints precisely as their competitors begin to scale their AI capabilities. The real AI revolution lies not in the models but in the infrastructure, and it is an issue technology leaders cannot afford to overlook.

Image credit: iStockphoto/Gang Zhou
