The Promise of the Edge

Reality, not just a buzzword

The release of the second anniversary issue of InterGlobix magazine also coincides with the tenth anniversary of the publication of my book Cloudonomics: The Business Value of Cloud Computing (Wiley, 2012). 

When Cloudonomics was published, the public cloud had started to gain adoption beyond the innovators and early adopters such as Netflix and Zynga, but there were still battles between the enterprise data center faction and those who almost religiously believed that the public cloud and its hyperscale data centers represented the ultimate and only possible solution.

In Cloudonomics, I used a combination of strategy, statistics, and analogies to show that an all-or-nothing end-state was unlikely, and that hybrids of enterprise data centers, colocation, and public cloud were likely for many customers as an end-state, not an ephemeral step on the path to full cloud.  I also described the trade-offs between centralization and dispersion, determining the optimum balance for any given networked application such as a search engine or an e-commerce site.  I call the resulting balance the hybrid multi-cloud fog: a hybrid of private and public, multiple clouds, and a spectrum of resources spanning the center to the edge.

Today, the edge is becoming a reality.  If public cloud was the defining theme of the 2010s, edge is the key priority of the 2020s.  This is largely due to a perfect storm of 1) an explosion of devices and data; 2) emerging enabling technologies; 3) economics; and 4) customer and application needs.  The edge that is emerging is much more than just having data centers in many countries or metros: it’s a node on every street corner, and in every factory or office building.

The Explosion of Devices and Data

Of the 7.9 billion people in the world, 7.1 billion use mobile phones, and 6.4 billion of these have smartphones.  This means that virtually anyone who can use a phone has one.  These numbers pale, however, compared to the total number of connected things that can be anticipated.  Today, a household may have scores of connected devices, including smart TVs, smart outlets, smart thermostats, smart speakers, smart meters, and smart appliances, not to mention phones, PCs, laptops, and tablets, and connected cars, soccer balls, pacemakers, security cameras, alarm systems, and the like.  And that is just residential applications: it is easy to see how to get to a few hundred billion or even a trillion connected devices once you add in factories, farms, transportation, and other industry verticals, smart cities, and even military applications.

Some of these devices are low data rate, e.g., connected water meters that report the number of gallons used every 15 minutes.  However, at the other extreme are devices that are spewing out massive amounts of data.  Frame rates are increasing from 30 to 60 to 120, 240, or even 300 fps, and video device resolution is increasing from 1080p and 4K (3840×2160 pixels) to 8K UHD (7680×4320) to 16K Full Dome (16384×16384, or almost 270 megapixels).  Other emerging technologies, such as real-time electro-holographic displays, will need ridiculously high data rates, and brain-computer interfaces will need over 500 megapixels per image just to support the current human visual field.
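To see how quickly these resolutions and frame rates add up, here is a back-of-envelope sketch of raw (uncompressed) data rates, assuming 24 bits per pixel (8 bits per RGB channel).  Real streams are heavily compressed, but the raw figures illustrate the scale of the problem:

```python
BITS_PER_PIXEL = 24  # assumed: 8 bits per RGB channel, uncompressed

def raw_gbps(width, height, fps, bits_per_pixel=BITS_PER_PIXEL):
    """Raw (uncompressed) video bandwidth in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

# Resolutions and frame rates mentioned in the text
formats = {
    "1080p @ 30 fps": (1920, 1080, 30),
    "4K @ 60 fps": (3840, 2160, 60),
    "8K UHD @ 120 fps": (7680, 4320, 120),
    "16K Full Dome @ 300 fps": (16384, 16384, 300),
}

for name, (w, h, fps) in formats.items():
    print(f"{name:>24}: {raw_gbps(w, h, fps):8.2f} Gbps raw")
```

Even with aggressive compression, a fleet of high-resolution cameras quickly outgrows what can be economically backhauled to a distant cloud.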

The exponential increase in data coupled with the exponential increase in devices means that it often isn’t economically or technically feasible to transport this data to a distant location for processing within the timeframes applications need.

Emerging enabling technologies

However, there is a broad range of emerging technologies that can enable all this data to be locally transported, stored, processed, and distributed.  5G millimeter wave, for example, should offer relatively low latencies of less than five milliseconds for one hop across a radio access network, together with peak data rates of 10 to 20 gigabits per second.  Wi-Fi 6 has similar capabilities, and 6G and beyond will further improve performance.  Moreover, the declining cost and form factor for processors, storage, and networks mean that substantial capacity can be placed on every street corner, compared to the early days when buying a computer was a board-level decision and included the cost of the glass house data center.  However, low latencies and higher bandwidth come at a great cost: radically reduced propagation distances.  AM and FM radio signals can travel tens of miles; 3G and 4G could travel a few miles; but 5G mmWave can only travel about a quarter mile, and that’s assuming that nothing is blocking its path.  This fact alone means that any edge computing equipment colocated with 5G mmWave base stations will need to be located on every street corner.
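The distance argument can be quantified with a rough propagation-delay sketch.  It assumes light travels through fiber at about 200 kilometers per millisecond (roughly two-thirds of the speed of light in a vacuum); the distances below are illustrative assumptions, not measurements:

```python
# Assumed propagation speed in fiber: ~200,000 km/s, i.e. 200 km per ms
C_FIBER_KM_PER_MS = 200.0

def one_way_delay_ms(distance_km):
    """Best-case one-way propagation delay over fiber, in milliseconds."""
    return distance_km / C_FIBER_KM_PER_MS

# Illustrative distances from a device to various processing locations
locations = [
    ("street-corner edge node", 0.5),
    ("metro data center", 50),
    ("regional cloud region", 1000),
    ("cross-continent cloud", 4000),
]

for label, km in locations:
    print(f"{label:>24}: {one_way_delay_ms(km):6.3f} ms one way")
```

A round trip doubles these figures, and queuing and routing add more.  A sub-five-millisecond radio access hop is wasted if the packet then has to cross a continent, which is exactly why the compute must sit near the base station.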

Numerous other technologies are coming together to realize the promise of the edge: autoconfiguration and geolocation to enable rapid deployment and updating of meshed edge nodes; microservices to allow brief, real-time interactions, for example, as a car drives down a street but needs collision and traffic information; and crypto/tokens to enable micro-transactions and micro-payments for services as they are rendered.


Economics

There are a number of subtle and not-so-subtle tradeoffs between the costs of processing at the edge or in the cloud.  As an example of the former, statistical multiplexing effects lead to lower costs for a unit of work when independent workloads use shared resources; your car probably gets less use on an average day than a taxi or Uber does.  The cloud, home to a million workloads, has great economics from sharing.  Interestingly, almost all of these benefits show up at a relatively small number of workloads.  Thus, it makes sense to provide shared capacity at the edge as well.  Moreover, by storing and processing data at the edge, the immense, not-so-subtle cost, not only in time but in dollars, of moving it to a remote cloud is eliminated, as are any data ingress/egress fees.
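The statistical multiplexing claim can be illustrated with a small simulation: compare the capacity a shared pool needs (the peak of the aggregate demand) with dedicated per-workload capacity (the sum of the individual peaks).  The uniform random demand model is an assumption for illustration only:

```python
import random

random.seed(42)  # reproducible illustration

def simulate(n_workloads, n_steps=1000):
    """Ratio of shared-pool capacity to dedicated capacity."""
    # Each workload draws an independent random demand at every time step.
    loads = [[random.random() for _ in range(n_steps)]
             for _ in range(n_workloads)]
    # Dedicated: each workload provisioned for its own peak
    sum_of_peaks = sum(max(load) for load in loads)
    # Shared: one pool provisioned for the peak of the aggregate demand
    aggregate = [sum(load[t] for load in loads) for t in range(n_steps)]
    peak_of_sum = max(aggregate)
    return peak_of_sum / sum_of_peaks

for n in (1, 2, 5, 10, 50, 200):
    print(f"{n:4d} workloads: shared pool needs "
          f"{simulate(n):5.0%} of dedicated capacity")
```

The ratio drops steeply for the first handful of workloads and then flattens out, which is the point of the passage above: a modest street-corner pool captures most of the sharing benefit, without needing hyperscale.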

Customer and application needs

Naturally, in addition to straightforward cost-benefit/ROI economics, there are many reasons for processing at the edge, ranging from real-time response needs for autonomous vehicles or augmented reality, to local collaboration such as collision avoidance, to privacy and data sovereignty, to survivability.  Clouds can, do, and will go down, and networks experience systemic failures, fiber and cable cuts, and congestion.  The edge can help increase availability and continuity.

For these reasons, virtually any customer or application can benefit from the use of edge.  Consider augmented reality, whether in a professional setting such as a hospital for surgery or a factory for equipment maintenance, or a consumer setting such as a retail experience.  Such applications need to acquire a video stream, use complex image processing to identify objects, whether they be “dog” or “Bill Smith from 10th grade algebra class”, access external information, and possibly determine or suggest actions, all within a few hundred milliseconds or much less.  Adding in 100 to 200 milliseconds for WAN transport is infeasible.  Or, consider vehicle interactions with other vehicles; with roadside infrastructure such as toll systems, digital signage, and traffic lights; and with pedestrians, bicyclists, and animals.  Such activities can’t be processed entirely in the endpoint, such as a smartphone or car, nor is there the time or budget to process them in the cloud, hence the critical importance of the edge.  Enterprise data centers, colocation and interconnection facilities, and hyperscale clouds are not going away; they are just welcoming the new kid on the (literal) block to help build the architectures of this decade.