
Where is the ‘edge’ in edge computing? And who gets to decide?

There are now so many “edges” in data center deployments that it’s getting harder to find the “center” in the data center. Despite plenty of contention, nobody yet has an edge on defining “the edge.”
Written by Scott Fulton III, Contributor

Calling a technological domain "the edge" gives it a cool sound, like it's just pushing the boundaries of some innovative envelope. So naturally, there are multiple subdomains of the world's wireless network that operators and equipment providers have staked out as "the edge." There is a "network edge" that you'd think would extend to the farthest boundaries of its coverage areas. Actually, the "network edge" can be inches away from the wireless core, if the functions being served there extend directly to the customer.


An edge-ready mini data center as envisioned by cabling solutions provider Datwyler.

Then there's the "customer edge," and if you're not confused yet, that should be the outermost frontier of a customer's own assets. There is a "cloud edge" and an "edge cloud," which some vendors say are the same thing, and others may construe as totally separate concepts.

The reason for the ambiguity is this: The future of both the communications and computing markets may depend on the shape these edges form once they are finally brought together. This will determine where the points of control, and where the points of access, will reside. And as 5G Wireless networks continue to be deployed, the eventual locations of these points will determine who gets to control them, and who gets to regulate them.

Land grab

"The edge is not a technology land grab," remarked Cole Crawford, CEO of µDC producer Vapor IO. "It is a physical, real estate land grab."


A Vapor Chamber, designed in collaboration with the Kinetic Edge Alliance and produced by Vapor IO.

Vapor IO

As ZDNet Scale reported in November 2017, Vapor IO makes a 9-foot diameter circular enclosure it calls the Vapor Chamber. It's designed to provide all the electrical facilities, cooling, ventilation, and stability that a very compact set of servers may require. Its aim is to enable same-day deployment of compute capability almost anywhere in the world, including temporary venues and, in the most lucrative use case of all, alongside 5G wireless transmission towers.

Since that report, public trials have begun of Vapor Chamber deployments in real-world edge/5G scenarios. The company calls this initial, experimental deployment schematic Kinetic Edge. Through its agreements with cellular tower owners including Crown Castle -- the largest single wireless infrastructure provider in the US, and an investor in Vapor IO since September 2018 -- this schematic has Vapor IO stationing shipping container-like modules, with cooling components attached, at strategic locations across a metro area.

By stationing edge modules adjacent to existing cellular transmitters, Vapor IO leverages the fiber optic cable links already in place, letting modules communicate with one another at minimum latency, at distances no greater than 20 km. Each module accommodates 44 server rack units (RU) and up to 150 kilowatts of server power, so a cluster of six fiber-linked modules would host 0.9 megawatts. While that's still less than 2% of the server power of a typical metropolitan colocation facility from a colo leader such as Equinix or Digital Realty, consider how competitive such a scheme could become if Crown Castle were to install one Kinetic Edge module beside each of its more than 40,000 cell towers in North America. Theoretically, the capacity already exists to facilitate the computing power of more than 700 metro colos.
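For readers who want to check the arithmetic, here is a back-of-envelope sketch. The module power, cluster size, and tower count come from the figures above; the size assumed for a typical metro colo is a hypothetical, chosen only for illustration.

```python
# Back-of-envelope check of the Kinetic Edge capacity figures cited above.
# MODULE_KW, MODULES_PER_CLUSTER, and TOWERS come from the article;
# TYPICAL_COLO_MW is an assumption made only for illustration.

MODULE_KW = 150            # max server power per edge module
MODULES_PER_CLUSTER = 6    # fiber-linked modules in one metro cluster
TOWERS = 40_000            # Crown Castle towers in North America (approx.)
TYPICAL_COLO_MW = 8.0      # hypothetical size of a typical metro colo

cluster_mw = MODULE_KW * MODULES_PER_CLUSTER / 1000
total_mw = MODULE_KW * TOWERS / 1000   # one module per tower

print(f"One six-module cluster: {cluster_mw:.1f} MW")                # 0.9 MW
print(f"All towers combined:    {total_mw:,.0f} MW")                 # 6,000 MW
print(f"Metro colo equivalents: {total_mw / TYPICAL_COLO_MW:,.0f}")  # ~750
```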

A map of Vapor IO's Kinetic Edge deployments.
Vapor IO

"As you start building out this Kinetic Edge, through the combination of our software, fiber, the real estate we have access to, and the edge modules that we're deploying, we go from the resilience profile that would exist in a Tier-1 data center, to well beyond Tier-4," said Crawford, referring to the smallest and largest classifications of data centers, respectively. "When you are deploying massive amounts of geographically disaggregated and distributed physical environments, all physically connected by fiber, you now have this highly resilient, physical world that can be treated like a highly connected, logical, single world."

Vapor IO has perhaps done more to popularize the notion of cell tower-based data centers than any other firm, particularly by spearheading the February 2019 establishment of the Kinetic Edge Alliance. But perhaps seeing a startup seize a key stronghold from its grasp, AT&T has recently backed away from characterizing its network edge as a place within sight of civilian eyes. In a 2019 demonstration at its AT&T Foundry facilities in Plano, Texas, the telco showed how 5G connectivity could be leveraged to run a real-time, unmanned drone tracking application. The customer's application in this case was not deployed in a µDC, but instead in a data center that, at some later date, may be replaced by one of AT&T's own, existing Network Technology Centers (NTCs).

It's AT&T's latest bid to capture the edge for itself, and hold it closer to its own treasure chest. In response, Vapor IO has found itself tweaking its customer message.


A Vapor IO Kinetic Edge facility next to a Crown Castle-owned RAN tower in Chicago.

Vapor IO

"When we first started describing our Kinetic Edge platform for edge computing, we often used the image of a data center at the base of a cell tower to make it simple to understand," stated Matt Trifiro, Vapor IO's chief marketing officer, in a note to this reporter. "This was an oversimplification."

"We evaluate dozens of attributes," Trifiro continued, "including the availability of multi-substation power, proximity to population centers, and the availability of existing network infrastructure, when selecting Kinetic Edge locations. While many of our edge data centers do, in fact, have cell towers on the same property, they mainly serve as aggregation hubs that connect to many macro towers, small cells and cable head ends."

Although cell towers are a principal factor in Vapor IO's site selection, Trifiro told ZDNet, they're not the only factor. Kinetic Edge sites are linked to one another through a dedicated software-defined network (SDN). The resulting system routes incoming traffic among multiple sites in a region, forming a cluster that Vapor IO does call an "edge cloud."

"In this way, we enable the Kinetic Edge to span the entire middle-mile of a metropolitan area, connecting the cellular access networks to the regional data centers and the Tier-1 backbones using a modern network topology," said Trifiro.

Precipice

The Kinetic Edge deployment model follows an emerging standard for enabling edge computing environments on highly distributed systems, for a plurality of simultaneous tenants. Last January, prior to the onset of the pandemic, the European standards group ETSI published two reports that jointly tackled the problem of virtualization -- giving each tenant a slice of an edge server -- in a way that could also serve as the foundation for telco-owned servers used in 5G Wireless.

Just as server and network virtualization provided the foundation for the modern data center cloud, these proposed standards could pave the way for a concept which, just last year, was being critiqued as oxymoronic: the edge cloud.

Network slicing is a deceptively difficult concept to implement in telco environments, many of which are already virtualized at one level. To pull it off, service providers would have to implement a second layer of virtualization at a deeper level -- one that allows telcos to utilize their servers for their own data services, while secluding and isolating customer-facing services so that they cannot peer into telcos' namespaces. There are both technological and legal hurdles for engineers to cross (regulations in many countries, including the US, prohibit the mixing of telco and customer environments), and prior to their drone tracker demo, AT&T's engineers had gone on record saying it could not be done.
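The isolation requirement can be pictured with a minimal sketch, assuming a simplified two-layer model. The class names and namespace contents below are hypothetical, standing in for whatever hypervisor- or container-level mechanism a telco would actually use.

```python
# Conceptual sketch of two-layer isolation on a telco edge server.
# Purely illustrative; names and namespace contents are hypothetical.

class Slice:
    """A tenant's virtualized share of an edge server (second layer)."""
    def __init__(self, tenant: str, namespace: dict):
        self.tenant = tenant
        self._namespace = namespace   # the only state this slice can see

class EdgeHost:
    """The telco's own virtualized environment (first layer)."""
    def __init__(self):
        # Telco-private state: never handed to any customer slice.
        self._telco_namespace = {"routing": "telco-internal",
                                 "subscribers": "telco-internal"}
        self._slices = {}

    def create_slice(self, tenant: str) -> Slice:
        # Each customer slice starts from an empty namespace, so customer
        # workloads hold no reference through which to reach telco state.
        s = Slice(tenant, namespace={})
        self._slices[tenant] = s
        return s

host = EdgeHost()
app_slice = host.create_slice("drone-tracker-customer")
print(app_slice._namespace)   # {} : no path into the telco's namespace
```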

ETSI's proposed approach for what it calls multi-access edge computing (MEC) would be to refrain from specifying just how virtualization takes place.

"The ETSI MEC architectural framework. . . introduces the virtualization infrastructure of MEC host either as a generic or as a NFV [network functions virtualization] Infrastructure (NFVI)," one ETSI document [PDF] reads.  "Neither the generic virtualization infrastructure nor the NFVI restricts itself to using any specific virtualization technology."

The result is a cluster of server components, each of which may be hosted by a hypervisor-driven environment such as classic VMware vSphere, or a container-driven, orchestrated environment such as Kubernetes. The system looks homogeneous enough on the surface, with applications and services being hosted, for lack of a more explicit model, however they're hosted. The lower layers of the infrastructure provide whatever isolation each tenant's workload of applications and services may require. From the perspective of the orchestrator or manager, it's all one cloud -- and that is how ETSI defines "edge cloud."
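The agnosticism ETSI describes can be sketched as a thin abstraction layer, as below. This is a minimal illustration of the idea; the class and method names are invented for this sketch, not drawn from the MEC documents.

```python
# Sketch of the virtualization-agnostic idea in the ETSI MEC framework:
# the manager sees one interface, whatever sits underneath. Names here
# are illustrative, not from the ETSI specification.
from abc import ABC, abstractmethod

class VirtInfra(ABC):
    """Generic virtualization infrastructure on one edge host."""
    @abstractmethod
    def deploy(self, workload: str) -> str: ...

class HypervisorInfra(VirtInfra):
    def deploy(self, workload: str) -> str:
        return f"{workload}: launched in a VM (e.g., vSphere)"

class ContainerInfra(VirtInfra):
    def deploy(self, workload: str) -> str:
        return f"{workload}: scheduled as a pod (e.g., Kubernetes)"

# From the orchestrator's perspective, the mixed cluster is one cloud.
edge_cloud: list[VirtInfra] = [HypervisorInfra(), ContainerInfra()]
for host in edge_cloud:
    print(host.deploy("tenant-app"))
```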

The problem with this point of view, as some US-based engineers see it, is that it assumes edge systems may be contained unto themselves, entirely at the edge. If you're a manufacturer of systems and components designed to go elsewhere, you don't want to build partitions for yourself.

"If you're going to deliver real-time inferencing at the edge, typically that means you've trained a model back in your data center," explained Matt Baker, Dell Technologies' senior vice president of strategy and planning.  "And this is one reason we say edge doesn't exist unto its own. Edge is a part of a broader environment: edge to core to cloud."

Last February, Baker was rolling out an extension to his company's edge systems architecture geared for high-performance AI and data analytics scenarios, called Dell EMC HPC Ready for AI and Data Analytics. In a system that enables its parts to be defined by the workloads it runs, said Baker, the separation of powers tends to evolve into silos. Case in point: machine learning. Bright Cluster Manager for ML may require one platform; if another workload runs better on Spark, that's another platform. The result is workload isolation and reinforced complexity for their own sake.

"So what we wanted to do is build a ready architecture for many AI and data analytics frameworks," said Baker, "so that it's just a whole lot easier for our customers to approach, deploy, and leverage all of these new, great technologies like Cassandra, Domino, Spark, Kubeflow."

What Dell is calling a system for edge computing is, in this case, a very dense server rack. At first glance, and even at second glance, it doesn't appear to fit the typical bill of an edge-optimized system, even one from Dell. Indeed, Dell EMC published earlier forms of its HPC Ready architecture, including one back in early 2018 [PDF], without any mention of edge computing. What is it that makes a server rack non-edge one year, and edge-certified the next?

"I think it's important to observe that this is an ecosystem, an end-to-end system," Baker explained.  "And in order to develop a real-time inferencing application, it typically requires that you train it against a large set of data. This is designed to complement and be deployed not physically alongside, but logically alongside, the streaming data platform."

Dell believes an edge computing platform need not be physically deployed at any edge at all. It's an edge cloud of sorts, that you don't even have to know is at the edge. In response to a question from ZDNet, Baker confirmed that this architecture was designed for an environment that is staffed by human beings — which already suggests its location is in the zone that Dell, at least in the past, called the "core."

The breakout switch


The AT&T Foundry facility in Plano, Texas, ironically as seen by a drone.

AT&T

For the 4G Wireless model, engineers added an ingenious type of network switch. It allowed a request to a service from a customer's device, such as a smartphone, to bypass the usual Internet routing scheme, enabling a local server to process that request more expediently. It was called the local breakout (LBO) switch, and it's the reason many major Web sites respond quickly to users, even with less-than-optimal connections.

Being able to switch an incoming data request so that a local server responds to it, rather than a remote one, turns out to be a handy tool in the arsenal of a telco that wants to direct traffic from the radio access network (RAN) to wherever it considers the edge to be — the place with the most value for that telco. For AT&T, as its drone demo proved, this can enable IoT traffic to be routed into its own facilities — into what Dell would have defined as the "core," but what can now be marketed as the edge. It's a technique built on top of LBO, called serving gateway local breakout (SGW-LBO).
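A minimal sketch of the selective-offload decision may help. The service names and the two-path split below are hypothetical stand-ins for what a real SGW-LBO deployment would configure.

```python
# Illustrative sketch of selective local breakout: offload only designated
# traffic classes to an edge server; send the rest along the normal
# core-network path. Service names are hypothetical.

EDGE_SERVICES = {"drone-tracking", "video-analytics"}   # offload these

def route(service: str) -> str:
    """Decide where a request entering from the RAN should be served."""
    if service in EDGE_SERVICES:
        return "local edge server"       # broken out at the serving gateway
    return "core network / internet"     # default path, untouched

for svc in ("drone-tracking", "web-browsing"):
    print(f"{svc} -> {route(svc)}")
```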

Athonet is a communications equipment provider that has already rolled out SGW-LBO to some telco customers, following its launch in February 2018. In a statement at the time, the company said, "The benefit of this approach is that it allows specific traffic (not all traffic) to be offloaded for key applications that are implemented at the network edge without impacting the existing network or breaking network security. . . We therefore believe that it is the optimal enabler for MEC."

There's that ETSI term again. If telcos have their hands exclusively on SGW-LBO switches, then what's to prevent them from diverting all incoming traffic from their RANs directly into their NTCs, declaring those NTCs "the new edge," and reaping the jackpots?

Juniper Networks, at least theoretically, would benefit from whichever way the LBO switch is thrown. Its CTO, Raj Yavatkar, told ZDNet he sees potential value for an AT&T, a Verizon, or a T-Mobile in embracing, or at least enabling, the Kinetic Edge model — of letting LBO point traffic in their direction. His argument is that it would free telcos from depending exclusively upon the largest hyperscale cloud service providers.

"We see that if telcos simply rely on hyperscalers to provide all these services, and only focus on providing connectivity," said Yavatkar, "they won't be able to take advantage of the value-added services that they can sell to their enterprise customers, and monetize them. There's a balance to be considered, with respect to what is served from hyperscalers, and what is served in a cloud-agnostic, cloud-neutral way, from the edge of the cloud."

StackPath, with which Juniper has partnered, could conceivably provide not only the edge infrastructure for telco services, but also the platform for a marketplace on which those services are sold — a kind of cloud at the edge that, neither technically nor commercially, is actually "the cloud."

Prospects

It would be a mistake to presume that edge computing is a phenomenon which will eventually, entirely, absorb the space of the public cloud. Indeed, it's the very fact that the edge can be visualized as a place unto itself, separate from lower-order processes, that gives rise to both its real-world use cases and its someday/somehow, imaginary ones. It was also a mistake, in perfect hindsight, to presume the disruptive economic force of cloud dynamics could completely commoditize the computing market, such that a virtual machine from one provider is indistinguishable from any other VM from another, or that the cloud will always feel like next door regardless of where you reside on the planet.

Yet it's very easy, when plotting the exact specifications of what any service provider's or manufacturer's edge services, facilities, or equipment should be, to get caught up in the excitement of the moment and imagine the edge as a line that spans all classes and all contingencies, from sea to shining sea. Like most technologies conceived and implemented this century, it's being delivered at the same time it's being assembled. Half of it is principle, and the other half promise.

Once you obtain a beachhead in any market, it's hard not to want to drive further inland. That's where the danger lies: where the ideal of retrofitting the Internet with quality of service can make anyone lose, to coin a phrase, their edge.

This article contains updated material that first appeared in an earlier ZDNet Executive Guide on edge computing.
