Excerpted from CIR’s upcoming report on Edge Computing Processors.
Although edge computing has seen many developments in recent years, it is not a new concept. “Content delivery networks” or “content distribution networks” (CDNs) have been around for a long time, and they resemble edge computing in structure: a geographically distributed network of proxy servers and their data centers, with the goal of providing high availability and performance by distributing the service spatially relative to end users.
A CDN, however, focuses only on the simple delivery of content such as images and text, while edge computing is about processing and performing complex calculations at or near the source. True edge computing involves processing and analyzing latency-sensitive traffic at the edge of the network, that is, close to edge devices; non-critical data is sent back to a cloud or core server for processing and storage. CIR sees edge computing as the next big thing in datacom after cloud computing. As such, the edge computing market is witnessing growing interest from OEMs, systems integrators, service providers, venture capital and, increasingly, chip firms. Very broadly speaking, there are three types of processor used in edge computing. These are shown in the following table.
Types of Edge Computing Processors
|Type|Description|Where Used|Supplier Activity|
|---|---|---|---|
|Standard CPUs|Similar or identical to the CPUs and similar chips used in small computers and mobile devices|Previous generations of edge devices, edge servers, etc.|Mainstream processor firms|
|General-purpose edge processors|High-speed processors with an emphasis on speed and the ability to deal with latency|Next generation of edge devices|Interest from mainstream processor firms, with some OEMs bringing design in house|
|Application-specific processors|Designed for IoT, autonomous cars, AI|Edge communications for specific applications|Interest from mainstream processor firms, some OEMs bringing design in house, and involvement of the applications community|
Structure of the Edge Network and Edge Traffic
While the terms “edge cloud” and “edge computing” are often used interchangeably, they are different. Edge computing encompasses all elements of the infrastructure, that is, hardware and devices along with software and services, while edge cloud refers to the distributed cloud infrastructure at the edge of the network that computes and stores critical data. The edge cloud is composed of data centers located at the local level, micro data centers, or edge PoPs. These data centers sit as close as possible to the endpoint, with the aim of reducing latency and increasing quality of experience (QoE). In other terms, edge computing can be defined as edge cloud combined with content delivery networks (CDNs).
It is important to note that the concept of distributing resources to decrease latency and improve QoE is not new; the sole reason CDNs exist is to decrease latency and improve performance. The difference between a CDN and edge computing, however, is the edge cloud, which, unlike a CDN, does not merely deliver simple content such as images and scripts but does processing as well. Like a CDN, the main focus of edge computing is to bring compute closer to the source to reduce latency and increase QoE. While micro data centers will play a key role in edge computing, the device edge will play a significant role as well.
The latency-sensitive aspect of the traffic in an edge network is critical to the definition of edge computing/networking. It can be (and frequently is) accommodated with conventional processors designed into edge servers, gateways and data center products in a way that reflects the latency traffic patterns associated with edge processing. The need in edge processing is to provide processing and storage closer to users and “things.” This, in turn, means that edge processors must supply high speed at a relatively low cost. In addition, to varying degrees, edge processors must focus on supplying privacy and security.
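The edge/cloud split described above, where latency-sensitive traffic is handled near the source while non-critical data is forwarded upstream, can be sketched as a simple routing policy. The threshold, message fields and class names below are illustrative assumptions for this sketch, not taken from any particular product.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative latency budget (ms): an assumed cutoff below which traffic
# must be processed at the edge rather than round-tripped to a core cloud.
EDGE_LATENCY_BUDGET_MS = 20.0

@dataclass
class Message:
    payload: bytes
    latency_budget_ms: float  # how quickly the source needs a response

@dataclass
class EdgeGateway:
    processed_at_edge: List[Message] = field(default_factory=list)
    forwarded_to_cloud: List[Message] = field(default_factory=list)

    def route(self, msg: Message) -> str:
        """Process latency-sensitive traffic locally; send the rest upstream."""
        if msg.latency_budget_ms <= EDGE_LATENCY_BUDGET_MS:
            self.processed_at_edge.append(msg)   # compute near the source
            return "edge"
        self.forwarded_to_cloud.append(msg)      # non-critical: cloud/core server
        return "cloud"

gateway = EdgeGateway()
print(gateway.route(Message(b"sensor reading", latency_budget_ms=5.0)))     # edge
print(gateway.route(Message(b"daily log batch", latency_budget_ms=500.0)))  # cloud
```

In a real deployment the routing decision would also weigh privacy, security and bandwidth cost, per the text above, but the latency budget is the defining criterion.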
The Chip Firms Respond to the Edge Opportunity
Designs for the new generation of edge chips will come from OEMs, chip design firms and (what is more or less the same thing) fabless manufacturers. The actual manufacturing of the chips will be done, as always, by foundries such as TSMC. We believe that, as a result of the emergence of the edge, there will be important instances of OEMs acquiring design firms and fabless “manufacturers”; in fact, this is already beginning to happen.
Bringing the design of edge computing processors in house provides OEMs with new opportunities to create competitive advantages. With edge chip design in house, edge OEMs can protect their edge servers, edge routers, edge gateways and other edge data center products to a greater extent than they otherwise could. However, for low-end products (e.g., small business edge routers), bringing designs in house may hardly be worth it.
Custom silicon for OEMs: Therefore, outside of commoditized products, the key trend we observe in the edge processor/chip segment is that OEMs are using custom silicon rather than buying chips through traditional channels. One example is Amazon, which has shifted Alexa from Nvidia chips to its in-house-developed Inferentia chips. According to early tests, Inferentia clusters deliver the same results as Nvidia’s T4 chips, but at 25% lower latency and 30% lower cost. Note that it is the higher performance and lower cost that add the value here, not additional capabilities.
Finally, we note that there is a growing number of fabless manufacturers with a niche focus on the edge. These include Hailo, Deep Vision, Cambricon, Syntiant, Blaize and Mythic, among others.
Chips for Specialist Edge Environments
Edge computing processors may also be adapted to particular environments where the edge is especially widely deployed, for example IoT, AI or autonomous cars. That is, unlike the general run of silicon, the value in these chips is created by new or specialist capabilities.
AI chips: The trend toward specialization in edge processors has gone surprisingly far in scope. Edge chips with AI capabilities in particular are expected to be a key focus of research-and-development activity and investment. Consider Amazon’s AZ1 Neural Edge processor, designed to speed up Alexa’s responses to queries and commands by hundreds of milliseconds per response. Deep Vision’s ARA-1 inference processor, for example, is designed to enable AI vision applications at the edge; the company has raised $19 million.
5G and the edge: In addition, edge processors are likely to be used closely with 5G. While 4G can support edge computing, the speeds required for edge computing are really only possible with 5G. 5G provides a broadband connectivity solution that is highly suitable for edge computing in terms of availability, bandwidth and latency. Therefore, much depends on how fast and how far 5G is deployed.
CIR expects the countries with the deepest adoption of 5G to also have the strongest edge computing markets. Naturally, the U.S., which is at the forefront of 5G, is also the largest edge computing market. China is witnessing a government push for 5G and hence has a substantial edge computing market. Europe is playing a catch-up game in 5G.