Google's UniSuper snafu was a public cloud warning to telcos

The public cloud services provided by Amazon, Microsoft and Google look wholly unsuited to much of the telco network and carry huge systemic risk.

Iain Morris, International Editor

May 29, 2024

5 Min Read
Google (campus pictured) is among the hyperscalers synonymous with the public cloud. (Source: Iwan Baan)

Earlier this month, Google accidentally deleted a $125 billion Australian pension fund called UniSuper, like a hungover teacher fumbling the wrong key and fatally erasing a student's essay. The fund and all its contents were eventually rescued from the cloud dumpster, but for a week about half a million UniSuper members would have felt like they were in one of those American thrillers about the NSA or some other secretive agency, where the lead character's identity or life savings mysteriously disappear from all computer records, and they effectively cease to exist.

"What if that pension fund had been a 5G core?" is the question some telcos are inevitably asking. In cases where operators have moved all their mobile traffic onto that core, and it's the only one they have, this would return a swathe of the population to the 1980s – hopefully minus the perms and shell suits – when there were payphones with queues outside, and thumbs denied a daily regime of touchscreen tapping were less gymnastic. Judging by the GDP value ascribed to mobile telecom, it would be an economic catastrophe.

Fear of this mobile meltdown partly explains some telco and government aversion to the public cloud services offered by Amazon (AWS), Microsoft (Azure) and the UniSuper-throttling Google (Google Cloud Platform). In the few examples of core network deals between telcos and public clouds, applications are hosted partly inside telco facilities rather than public cloud data centers. AWS and Telefónica insist on calling an arrangement of this nature a "public" cloud deal, but Microsoft uses "hybrid" cloud to describe its Nexus-branded equivalents.

Economically, the public cloud is attractive because of its scale and because infrastructure is shared. The idea is that customers pay for only what they use (hence the "as-a-service" label) and effectively split server and energy costs with other companies using the same facilities. But these benefits disappear once the AWS, Microsoft or Google technologies are installed in a telco's own premises and paid for by the telco (unless, unconventionally, those premises are shared).

The loneliness of the long-distance RAN

The trouble for the hyperscalers is that public cloud – true public cloud, that is – looks even unlikelier for other telco workloads, and especially the radio access network (RAN), one of the biggest cost items.

Tour just about any mobile network today and you will note various boxes at the base of masts for housing IT resources. Unlike the radio units and antennas atop those masts, the baseband units in these boxes can theoretically be moved into a telco facility such as a baseband hotel (we kid you not) where they can hang out with other once-lonely units and slumber alongside them at night. They could even be replaced by servers in a hyperscaler's data center.

But this approach will probably be uncommon. For a start, any such centralization would necessitate a hefty investment in fronthaul, the linkage – usually fiber – between those radio and baseband units. The cost of this could massively outweigh the savings promised by resource consolidation.

What's more, these baseband aggregation points (whatever you call them) would still need to be in the vicinity of end users to avoid excessive latency, a measure in milliseconds of the roundtrip journey time for a network signal. A single big hyperscaler facility would do only for users in its shadow. Repurposed central offices might work, but most telcos will probably leave some of the baseband at the mast. Vodafone UK, which is virtualizing sites as it switches from Huawei to Samsung, is leaving all the baseband there.
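The distance limit above follows from simple arithmetic, sketched below. The specific figures are illustrative assumptions, not from the article: light propagates through fiber at roughly 200 km per millisecond, and fronthaul latency budgets are commonly cited in the low hundreds of microseconds one way, which caps the fiber run between a radio unit and its baseband at a few tens of kilometers.

```python
# Back-of-the-envelope fronthaul reach, under assumed figures.
# Light in fiber covers roughly 200 km per millisecond (~5 us per km).
SPEED_IN_FIBER_KM_PER_MS = 200.0

def max_fronthaul_km(one_way_budget_ms: float) -> float:
    """Longest fiber run that fits within a one-way latency budget."""
    return one_way_budget_ms * SPEED_IN_FIBER_KM_PER_MS

# An often-quoted fronthaul budget of ~0.1 ms one way allows only ~20 km
# of fiber - which is why baseband cannot sit in one distant mega data center.
print(max_fronthaul_km(0.1))  # 20.0
```

This is why repurposed central offices, which sit close to subscribers, are plausible aggregation points while a single hyperscaler campus is not.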

The platform it has chosen comes from Wind River, which competes against Broadcom-owned VMware and IBM-owned Red Hat in this subsector. An alternative is to use the RAN vendor's own platform, such as Ericsson's cloud-native infrastructure solution (CNIS). This is preferred by AT&T despite its use of Microsoft's Nexus platform for the 5G core.

One reason could be AT&T's reliance on Ericsson for just about every part of the RAN bar some radio units supplied by Fujitsu (an Ericsson 5G partner since 2018). In a white paper published late last year, Vodafone and NTT Docomo complained that some of the interfaces between these different parts were not fully open, despite the efforts of the O-RAN Alliance to develop more interoperable specifications.

O2, an interface between the virtualization or cloud platform and the service management and orchestration (SMO) platform, needed "vendor proprietary extensions," said the telcos. For AT&T, taking Ericsson's "full stack" – CNIS, for cloud, and the Intelligent Automation Platform, for SMO – conceivably minimizes the hassle.

Not so common after all

The nature of RAN virtualization is also undermining some of the arguments about cloud economics. Ideally, the RAN should be deployable on commercial, off-the-shelf (COTS) servers, uniform kit racked in a data center and usable by anyone from a bank to a fast-food outlet to a retailer, besides the telco. Instead, chips and servers are being customized to handle the RAN's specific needs.

Granite Rapids-D, a forthcoming Intel product, bundles fronthaul connectivity and accelerated computing into a server. It's no longer a general-purpose processor, according to Nokia, which reckons that building all servers in such a way would add a lot of unnecessary overhead. But having multiple flavors on the menu complicates the recipe book for server companies and increasingly invalidates the COTS label.

None of this will stop public cloud advocates within telcos from charging ahead, even if they are quickly forced to compromise. The typical response to the UniSuper snafu and various outages last year is to point out that a company could just as easily (perhaps more easily) delete its own data, mess up a software change or be hacked. But this overlooks the same desire for control that explains why many of us would rather drive than be driven.

If that sounds irrational, consider instead these words penned by the Bank for International Settlements in 2022: "Growing reliance by a large number of financial institutions on technology services provided by a small number of big techs makes the continuity of those services systemically relevant. This dependency is forming single points of failure, and hence creating new forms of concentration risk at the technology services level." Dividing dozens of 5G cores between three US companies is surely the definition of systemic risk.


About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).
