Computing first networking – load balancing for the edge era

A new buzz phrase crossed your correspondent’s desk: “Computing First Networking.”

WTF is it?

According to a note from analyst firm Gartner, it is “an emerging network solution that provides services for computing tasks by sharing compute resources from multiple edge sites” and “a new type of decentralized computing solution that optimizes the efficiency of edge computing.”

But why does edge computing need more efficiency? For the last few years, we’ve been told that edge solves your problems by moving compute closer to where the data is created, so you can work on that data, and act on the outcomes of that work at the edge, without having to schlep the data to a cloud or data center for processing. Or pay for that lugging.

Reality check: Gartner thinks that computing resources at the edge may not have enough capacity to handle all the work they need to do, so you need to find other resources to get the job done. Maybe even resources elsewhere on the edge.

At this point, cunning Reg readers will probably have thought to themselves that allocating workloads to available resources is exactly the kind of task that load balancers perform.

Sorry. Gartner thinks load balancers are not built to work with edge resources, which often run containers and serverless workloads, meaning usage rates change very quickly in the “MECs” – the Multi-access Edge Computing sites with a router, servers, power, and cooling – out on the edge.

Enter Computing First Networking (CFN), described by Gartner’s Owen Chen as “a new type of decentralized computing solution [that] uses both compute and network status to help determine the optimal edge site among multiple edge sites in different geographic locations to meet a specific edge computing request.”

CFN does this using dynamic anycast (dyncast), which Gartner describes as “a distributed technique that follows the idea of resource pooling to transfer service requests to the optimal [edge] site in a dynamic way.”

“Instead of measuring common metrics such as CPU/GPU/memory usage of a MEC site, dyncast uses a compute-aware module called the site daemon to get the compute state at application granularity. This helps calculate a compute metric that reflects the workload of a particular application deployed on the MEC site.”
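To make the dyncast idea concrete, here is a minimal Python sketch of how such a scheduler might weigh a per-application compute metric (of the kind a site daemon would report) against network round-trip time to pick an edge site. This is not Gartner’s or any vendor’s actual algorithm; the names SiteState, compute_metric, rtt_ms, and compute_weight are invented for illustration.

```python
# Hypothetical sketch of dyncast-style site selection.
# All names and weights are illustrative, not part of any real CFN implementation.
from dataclasses import dataclass


@dataclass
class SiteState:
    name: str               # MEC site identifier
    compute_metric: float   # per-application load reported by the site (0 = idle, 1 = saturated)
    rtt_ms: float           # measured network round-trip time to the site


def pick_site(sites: list[SiteState], compute_weight: float = 0.7) -> SiteState:
    """Pick a site by combining compute and network state; lower score wins."""
    def score(site: SiteState) -> float:
        # Blend application-level load with (normalised) network latency.
        return compute_weight * site.compute_metric + (1 - compute_weight) * (site.rtt_ms / 100.0)
    return min(sites, key=score)


if __name__ == "__main__":
    sites = [
        SiteState("mec-highway-7", compute_metric=0.95, rtt_ms=4.0),  # close but busy
        SiteState("mec-cbd-2", compute_metric=0.10, rtt_ms=9.0),      # further away but idle
    ]
    print(pick_site(sites).name)  # expected: mec-cbd-2
```

The point of the weighting is that a nearly idle site a few milliseconds further away can beat a closer but saturated one, which is the behaviour a plain round-robin or connection-count load balancer wouldn’t give you.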

Gartner gives enabling autonomous cars as one example application for CFN.

The company envisions MEC sites taking on the task of collecting and processing traffic information so driverless cars know what to expect on the road, as well as streaming video into cars to entertain passengers. Analyzing traffic is clearly the most important job a MEC can perform, so Gartner imagines CFN would find other resources that aren’t busy and have them take over the job of streaming video.

And those resources will probably be up for grabs, because in a well-wired city there will be MECs at plenty of 5G base stations. At the end of the working day, when CBDs and industrial parks empty out, the MECs there should be ready to take on some work, while the MECs beside highways are running at maximum capacity.

CFN pops up in papers at scholarly conferences, but didn’t feature on Gartner’s 2021 Hype Cycle for edge computing. So it’s probably not something you need to implement in a hurry. But it’s a sign that doing edge well isn’t going to be as easy as deploying the edge-centric servers and software overlays that have been announced every other week lately. ®
