Image: metamorworks, Getty Images/iStockphoto
In recent years, computing workloads have been migrating: first from on-premises data centres to the cloud and now, increasingly, from cloud data centres to ‘edge’ locations where they are nearer the source of the data being processed. The goal? To boost the performance and reliability of apps and services, and reduce the cost of running them, by shortening the distance data has to travel, thereby mitigating bandwidth and latency issues.
That’s not to say that on-premises or cloud centres are dead — some data will always need to be stored and processed in centralised locations. But digital infrastructures are certainly changing. According to Gartner, for example, 80 percent of enterprises will have shut down their traditional data centre by 2025, versus 10 percent in 2018. Workload placement, which is driven by a variety of business needs, is the key driver of this infrastructure evolution, says the analyst firm:
With the recent increase in business-driven IT initiatives, often outside of the traditional IT budget, there has been a rapid growth in implementations of IoT solutions, edge compute environments and ‘non-traditional’ IT. There has also been an increased focus on customer experience with outward-facing applications, and on the direct impact of poor customer experience on corporate reputation. This outward focus is causing many organizations to rethink placement of certain applications based on network latency, customer population clusters and geopolitical limitations (for example, the EU’s General Data Protection Regulation [GDPR] or regulatory restrictions).
There are challenges involved in edge computing, of course — notably around connectivity, which can be intermittent, or characterised by low bandwidth and/or high latency, at the network edge. That poses a problem if large numbers of smart edge devices are running software — machine learning apps, for example — that needs to communicate with central cloud servers, or nodes in the intervening ‘fog’. Solutions are on the way, however.
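The bandwidth-saving idea behind edge processing can be sketched in a few lines: rather than streaming every raw reading to a central cloud server, an edge node aggregates the data locally and forwards only a compact summary plus any anomalous readings. The function name and payload shape below are purely illustrative, not taken from any particular product:

```python
# A minimal sketch of edge-side pre-processing. Instead of shipping every
# raw sensor reading over an intermittent, low-bandwidth link, the edge
# node summarises locally and uploads only the aggregate and the outliers.

def summarise_at_edge(readings, threshold):
    """Aggregate raw sensor readings locally; flag outliers for upload."""
    anomalies = [r for r in readings if abs(r) > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings) if readings else 0.0,
        "anomalies": anomalies,  # only these need travel to the cloud
    }

# Four raw readings shrink to one small payload; only 50.0 is flagged.
payload = summarise_at_edge([1.0, 2.0, 50.0, 3.0], threshold=10.0)
```

However many readings arrive, the uplink carries one summary object, which is the latency and bandwidth mitigation the paragraph above describes.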
With edge computing sitting at the peak of Gartner’s 2018 Hype Cycle for Cloud Computing, there’s plenty of scope for false starts and disillusionment before standards and best practices are settled upon, and mainstream adoption can proceed. This introduction to ZDNet’s special report looks to set the scene and assess the current state of play.
Edge computing is a relatively new concept that has already been associated with another term, ‘fog computing’, which can lead to confusion among non-specialist observers. Here are some definitions that will hopefully clarify the situation.
Unlike Cloud Computing, which depends on data centers and communication bandwidth to process and analyze data, Edge Computing keeps processing and analysis near the edge of a network, where the data was initially collected. Edge Computing (a category of Fog Computing that focuses on processing and analysis at the network node level)…should be viewed as a de facto element of Fog Computing.
State of the Edge 2018
The delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost and reliability of applications and services. By shortening the distance between devices and the cloud resources that serve them, and also reducing network hops, edge computing mitigates the latency and bandwidth constraints of today’s Internet, ushering in new classes of applications. In practical terms, this means distributing new resources and software stacks along the path between today’s centralized data centers and the increasingly large number of devices in the field, concentrated, in particular, but not exclusively, in close proximity to the last mile network, on both the infrastructure and device sides.
451 Research/OpenFog Consortium
[Fog] begins on one ‘end’ with edge devices (in this context, we define edge devices as those devices where sensor data originates, such as vehicles, manufacturing equipment and ‘smart’ medical devices) that have the requisite compute hardware, operating system, application software and connectivity to participate in the distributed analytics Fog. It extends from the edge to ‘near edge’ functions, such as local datacenters and other compute assets, multi-access-edge (MEC) capabilities within an enterprise or operator radio access network, intermediate computing and storage capabilities within hosting service providers, interconnects and colocation facilities, and ultimately to cloud service providers. These locations have integrated or host ‘Fog nodes’, which are devices capable of participating in the overall distributed analytics system.
David Linthicum (Chief Cloud Strategy Officer at Deloitte Consulting)
“With edge, compute and storage systems reside at the edge as well, as close as possible to the component, device, application or human that produces the data being processed. The purpose is to remove processing latency, because the data needn’t be sent from the edge of the network to a central processing system, then back to the edge…Fog computing, a term created by Cisco, also refers to extending computing to the edge of the network. Cisco introduced its fog computing in January 2014 as a way to bring cloud computing capabilities to the edge of the network…In essence, fog is the standard, and edge is the concept. Fog enables repeatable structure in the edge computing concept, so enterprises can push compute out of centralized systems or clouds for better and more scalable performance.”
Here’s how the OpenFog Consortium visualises the relationship between data-generating ‘things’ at the network edge, cloud data centres at the core, and the fog infrastructure in between:
Image: OpenFog Consortium
According to B2B analysts MarketsandMarkets, the edge computing market will be worth $6.72 billion by 2022, up from an estimated $1.47bn in 2017 — a CAGR (Compound Annual Growth Rate) of 35.4 percent. Key driving factors are the advent of the IoT and 5G networks, an increase in the number of ‘intelligent’ applications, and growing load on cloud infrastructure:
Edge computing market dynamics
Among the vertical segments considered by MarketsandMarkets, Telecom and IT is expected to have the biggest market share during the 2017-2022 forecast period. That’s because enterprises faced with high network load and increasing demand for bandwidth will need to optimise and extend their Radio Access Network (RAN) in order to deliver an efficient Mobile (or Multi-access) Edge Computing (MEC) environment for their apps and services.
The fastest-growing segment of the edge computing market during the forecast period is likely to be retail, says MarketsandMarkets: high volumes of data generated by IoT sensors, cameras and beacons that feed into smart applications will be more efficiently collected, stored and processed at the network edge, rather than in the cloud or an on-premises data centre.
Grand View Research takes a more conservative view, estimating that the edge computing market will be worth $3.24 billion by 2025, although that’s still a ‘phenomenal’ CAGR of 41 percent over the 2017-2025 forecast period. Regionally, North America will lead the market due to increasing penetration of IoT devices in the US and Canada, said the research firm, while the vertical segment with the highest CAGR will be healthcare and life sciences, thanks to “storage capabilities and real-time computing offered by edge computing solutions”. SMEs will witness the highest CAGR (46.5%) over the forecast period, said Grand View Research, thanks to the ability of edge computing solutions to reduce operating costs.
The most optimistic growth estimate comes from 451 Research, in an October 2017 study — Size and Impact of Fog Computing Market — commissioned by the OpenFog Consortium. This wide-ranging research puts the market opportunity for fog computing at $18.2 billion by 2022, rising from $1.03bn in 2018 and $3.7bn in 2019 — a CAGR of 104.9 percent between 2018 and 2022.
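The CAGR figures quoted in these forecasts follow the standard compound-growth formula, CAGR = (end/start)^(1/years) − 1, and can be roughly reproduced from the endpoint values the analysts cite:

```python
# Reproducing the quoted compound annual growth rates from the forecast
# endpoints. Small differences from the published figures are rounding.

def cagr(start, end, years):
    """Compound annual growth rate between two values, years apart."""
    return (end / start) ** (1 / years) - 1

# 451 Research/OpenFog: $1.03bn (2018) to $18.2bn (2022), 4 years
fog_growth = cagr(1.03, 18.2, 4)    # ~1.05, i.e. roughly 105% per year

# MarketsandMarkets: $1.47bn (2017) to $6.72bn (2022), 5 years
edge_growth = cagr(1.47, 6.72, 5)   # ~0.355, i.e. roughly 35% per year
```

Both results land within a fraction of a percentage point of the published 104.9 and 35.4 percent figures.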
Data: 451 Research & OpenFog Consortium / Chart: ZDNet
According to 451 Research, the leading verticals for fog computing in 2022, in terms of market share, will be utilities, transportation, healthcare, industrial and agriculture.
Image: 451 Research & OpenFog Consortium
When it comes to the fog computing ecosystem in 2022, 451 Research breaks down the components like this:
Data: 451 Research & OpenFog Consortium / Chart: ZDNet
Hardware components are well out in front, with a 42.1 percent slice of the 2022 pie, followed by fog applications/platforms (21.5%) and fog services (20.4%). No wonder hardware vendors and cloud application/services providers are queueing up to get involved in the fast-developing edge/fog market.
Despite their different emphases, these forecasts make it clear that the ‘perfect storm’ for edge computing is being created by a rapidly increasing number of internet-connected devices and the imminent advent of high-bandwidth, low-latency 5G networks. Ericsson’s June 2018 Mobility Report summarises the expected developments in these areas.
Whereas PCs, laptops, tablets and (to a lesser extent) mobile phones show flat growth between 2017 and 2023, IoT devices are taking off: those with wide-area connections will see 30 percent CAGR, with short-range IoT devices showing significant but slower growth (17% CAGR). This results in an almost 80 percent (79.4%) increase in the number of connected devices between 2017 (17.5 billion) and 2023 (31.4 billion):
Data: Ericsson Mobility Report, June 2018 / Chart: ZDNet
As far as 5G is concerned, Ericsson expects the first data-only devices from the second half of 2018 and the first 5G smartphones in 2019. By 2023, following the advent of third-generation chipsets in 2020, the company forecasts that 1 billion 5G devices will be connected worldwide.
Image: Ericsson Mobility Report, June 2018
The first module-based 5G IoT devices, supporting ultra-low latency communications for industrial process monitoring and control, are expected during 2020, says Ericsson.
Standards & organisations
Any new IT initiative requires standards and best practices, and the early stages are often characterised by multiple groups and consortia with different agendas (despite often significant overlap in membership). Edge/fog computing is no exception.
Fog computing, a term coined by Cisco, is backed by the OpenFog Consortium, which was founded in 2015 by Arm, Cisco, Dell, Intel, Microsoft and the Princeton University Edge Laboratory. Its mission statement reads (in part):
Our efforts will define an architecture of distributed computing, network, storage, control and resources that will support intelligence at the edge of IoT, including autonomous and self-aware machines, things, devices, and smart objects. OpenFog members will also identify and develop new operational models. Ultimately, our work will help to enable and drive the next generation of IoT.
Edge computing is promoted by the EdgeX Foundry, an open-source project hosted by The Linux Foundation. EdgeX Foundry’s goals include: building and promoting EdgeX as a common platform unifying IoT edge computing; certifying EdgeX components to ensure interoperability and compatibility; providing tools to quickly create EdgeX-based IoT edge solutions; and collaborating with relevant open-source projects, standards groups and industry alliances.
According to EdgeX Foundry, “The project’s sweet spot is edge nodes such as embedded PCs, hubs, gateways, routers, and on-premises servers to address key interoperability challenges where ‘south meets north, east, and west’ in a distributed IoT fog architecture”.
EdgeX Foundry’s technical steering committee includes representatives from IOTech, ADI, Mainflux, Dell, The Linux Foundation, Samsung Electronics, VMware and Canonical.
There are two other industry bodies in this area: the Japan-focused EdgeCross Consortium, which was founded in November 2017 by Omron Corporation, Advantech, NEC, IBM Japan, Oracle Japan and Mitsubishi Electric; and the Industrial Internet Consortium, founded in 2014 by AT&T, Cisco, General Electric, Intel, and IBM.
What the surveys say
Futurum Research surveyed over 500 North American companies (ranging from 500 to 50,000 employees) in late 2017 to discover their position on edge computing — adoption and deployment, investment intent, and more. All respondents exerted influence on edge computing investment decisions, said Futurum: 41.8 percent were ‘operational staff’ and 25.6 percent at ‘director, manager, team lead’ level, though only 8.6 percent were classed as ‘executive, C-suite, owner, partner’.
Futurum reported that nearly three-quarters (72.7%) of companies had already implemented an edge computing strategy, or were in the process of doing so. Furthermore, almost all (93.3%) intended to invest in edge computing in the next 12 months:
Data: Futurum Research / Chart: ZDNet
Futurum also curates a general Digital Transformation Index, which in 2018 put 68 percent of companies in the ‘leaders’ and ‘adopters’ categories. So the fact that 72.7 percent of respondents are already investing in edge computing shows that this is a hot topic for tech-savvy businesses. However, Futurum also noted that “the eagerness of 93.3% of businesses to invest in edge computing in the next 12 months does not speak to the size of their investment”.
The positive vibes among Futurum’s respondents continued when they were asked about the importance of edge computing data streams in their business processes, with 71.8 percent describing these as ‘critically’ (22.2%) or ‘very’ (49.6%) important:
Data: Futurum Research / Chart: ZDNet
What were the key drivers of this enthusiasm for edge computing? For Futurum’s respondents, it was ‘improved application performance’, followed by ‘real-time analytics/data streaming’:
Data: Futurum Research / Chart: ZDNet
The analyst firm interpreted these priorities as a reflection of the need for operational efficiency, suggesting that the relatively low ranking for IoT strategy — often quoted as a canonical edge computing use case — “will likely increase in the coming years”.
Only 15.6 percent of Futurum’s respondents aimed to keep edge computing and cloud computing separate — a decision often driven by data and system security concerns, and a focus on compartmentalised operations, the research firm said. That leaves nearly 64 percent (63.9%) who had already deployed (28.3%) or were seeking (35.6%) combined edge/data centre analytics solutions, plus 20.5 percent who were unsure whether to combine these functions or keep them separate:
Data: Futurum Research / Chart: ZDNet
The ‘unsure’ and ‘seeking’ responses amount to 56.1 percent of the survey sample, which clearly represents a significant opportunity for edge computing providers.
The OpenFog Consortium surveyed its 61 member organisations on the state of fog computing in December 2017, finding that an impressive 70 percent of CEOs were aware of the fog computing initiatives happening on their watch.
Budgets for fog computing in 2018 were generally increasing (40%) or staying the same (51%), with just 5 percent of respondents reporting a decrease. Initiatives were primarily based in the R&D department (51%) and overwhelmingly had IoT applications as their primary focus area (70%).
Security was the number-one concern among OpenFog respondents (32%), followed by worries about early/unproven technology, interoperability and unclear ROI. The main drivers of interest in fog computing were latency and bandwidth issues. Respondents expected manufacturing, smart cities and transportation to be the top industry segments adopting fog computing, followed by energy, healthcare and smart homes.
Edge/fog computing can pull workloads away from cloud data centres, so it’s no surprise to see the cloud giants taking steps to prevent those workloads from escaping their orbit.
Introduced at Amazon’s 2016 re:Invent developer conference, AWS Greengrass builds on the company’s existing IoT and Lambda (serverless computing) offerings to extend AWS to intermittently connected edge devices.
“With AWS Greengrass, developers can add AWS Lambda functions to a connected device right from the AWS Management Console, and the device executes the code locally so that devices can respond to events and take actions in near real-time. AWS Greengrass also includes AWS IoT messaging and synching capabilities so devices can send messages to other devices without connecting back to the cloud,” said Amazon. “AWS Greengrass allows customers the flexibility to have devices rely on the cloud when it makes sense, perform tasks on their own when it makes sense, and talk to each other when it makes sense — all in a single, seamless environment.”
Image: Amazon Web Services
These are ‘smart’ edge devices, of course: Greengrass requires at least 1GHz of compute (either Arm or x86), 128MB of RAM, plus additional resources for OS, message throughput and AWS Lambda execution. According to Amazon, “Greengrass Core can run on devices that range from a Raspberry Pi to a server-level appliance”.
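The pattern Amazon describes — code acting locally on device events, syncing with the cloud only when it makes sense — can be sketched as a Lambda-style handler. This is an illustrative sketch only: a real Greengrass function would use the greengrasssdk to publish results to local MQTT topics, and the event fields and thresholds below are invented for the example.

```python
# A minimal sketch of the kind of Lambda-style handler Greengrass runs
# locally on an edge device (illustrative; field names are hypothetical).

def lambda_handler(event, context=None):
    """React to a sensor event on-device, without a cloud round trip."""
    temperature = event.get("temperature", 0.0)
    if temperature > 75.0:
        # Act immediately at the edge (e.g. trip a local shutdown relay),
        # then sync the incident to the cloud when connectivity allows.
        return {"action": "shutdown", "sync_to_cloud": True}
    # Routine readings can be batched locally and uploaded later.
    return {"action": "none", "sync_to_cloud": False}

decision = lambda_handler({"temperature": 82.5})
```

The point of running this at the edge is that the shutdown decision does not depend on the intermittent link back to AWS.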
Introduced at Microsoft’s BUILD 2017 developer conference and generally available since June 2018, Azure IoT Edge allows cloud workloads to be containerised and run locally on smart devices ranging from a Raspberry Pi to an industrial gateway.
Azure IoT Edge comprises three components: IoT Edge modules; the IoT Edge runtime; and IoT Hub. IoT Edge modules are containers that run Azure services, third-party services or custom code; they are deployed to IoT Edge devices and execute locally. The IoT Edge runtime runs on each IoT Edge device, managing the deployed modules, while IoT Hub is a cloud-based interface for remotely monitoring and managing IoT Edge devices.
Here’s how the different Azure IoT Edge elements fit together:
With general availability, Microsoft added new capabilities to Azure IoT Edge, including: open-source support; device provisioning, security and management services; and a simplified developer experience.
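An IoT Edge module of the kind described above is, in essence, a container that receives telemetry, transforms it locally, and decides what to route upstream to IoT Hub. The sketch below shows only that message-processing logic, with invented field names; a real module would use the azure-iot-device SDK’s module client to receive and send the messages.

```python
# A simplified sketch of the transformation step inside a custom Azure
# IoT Edge module (illustrative; the message schema is hypothetical).

def process_module_message(message: dict) -> dict:
    """Annotate a telemetry message at the edge and pick a route."""
    enriched = dict(message)
    enriched["processed_at_edge"] = True
    # Route only above-threshold readings upstream to IoT Hub; the rest
    # stay local, saving bandwidth on the device-to-cloud link.
    enriched["route_upstream"] = message.get("pressure", 0) > 100
    return enriched

out = process_module_message({"device": "pump-1", "pressure": 120})
```

In a deployed solution, the routing decision itself would be expressed declaratively in the IoT Edge deployment manifest rather than hard-coded.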
In July 2018, Google announced two products for developing and deploying smart connected devices at scale: Edge TPU and Cloud IoT Edge. Edge TPU is a purpose-built small-footprint ASIC chip designed to run TensorFlow Lite machine-learning models on edge devices. Cloud IoT Edge is the software stack that extends Google’s cloud services to IoT gateways and edge devices.
Cloud IoT Edge has three main components: a runtime for gateway-class devices (with at least one CPU) to store, translate, process and extract intelligence from edge data, while interoperating with the rest of Google’s Cloud IoT platform; the Edge IoT Core runtime that securely connects edge devices to the cloud; and the Edge ML runtime, based on TensorFlow Lite, that performs machine-learning inference using pre-trained models.
Both Edge TPU and Cloud IoT Edge are at the alpha testing stage at the time of writing (September 2018).
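One consequence of the Edge TPU design is that it executes 8-bit quantized TensorFlow Lite models: real-valued tensors are mapped to int8 using TFLite’s affine scheme, real = scale × (q − zero_point). A toy round trip through that mapping, with made-up scale and zero-point values, illustrates the arithmetic:

```python
# TFLite-style affine quantization, as used by 8-bit models on the Edge
# TPU: real = scale * (q - zero_point). Scale/zero_point here are
# illustrative values, not taken from any real model.

def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))      # clamp to the int8 range

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

q = quantize(0.5, scale=0.02, zero_point=0)        # 0.5 maps to int8 25
approx = dequantize(q, scale=0.02, zero_point=0)   # and back to ~0.5
```

Shrinking weights and activations to 8 bits is what lets a small-footprint ASIC like the Edge TPU run inference on-device instead of in the cloud.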
The edge/fog computing transformation is one of those shifts in focus that happen periodically in computing — from mainframes to desktop PCs, to on-premises data centres, to cloud data centres, for example. Now we’re looking at a mix of existing elements, along with billions of smart IoT devices, bound together by an intervening ‘fog’ of gateways and nodes. Device connectivity has been a bottleneck holding back this transformation, but that’s about to get a huge boost with the advent of 5G mobile networks.
Any industry sector that can derive benefit from the timely analysis of IoT data streams — and that’s pretty much all of them — will be interested in edge/fog computing. That’s why there are huge opportunities for vendors at all levels of the technology stack — standards, networking, compute, storage, applications and services.
With ever more data being generated, processed and stored in ever more locations, issues surrounding infrastructure management and data security, privacy and governance will become even more important than they are today. Let’s hope those issues are addressed sooner rather than later.
RECENT AND RELATED CONTENT
Understanding Edge Computing
The edge is that theoretical space where a data center resource can be accessed in the minimum amount of time. The location could be in the data center, on the desktop, or wherever there’s a need.
Microsoft Azure gets new tools for edge computing and machine learning (TechRepublic)
Announced at the Microsoft Ignite conference today, these new features are designed to streamline the collection, processing and analysis of large volumes of data.
VMware taps IoT to extend hybrid and multi-cloud environments to the edge (TechRepublic)
At VMworld 2018, VMware unveiled its extended edge computing strategy to better control, secure, and scale customers’ edge and IoT applications and solutions.
IT leader’s guide to edge computing (Tech Pro Research)
Companies of all sizes and across various industries are moving to edge computing to generate, collect, and analyze data so they can take immediate action on that information. This guide looks at the pros and cons of edge computing and how its real-world usage has been working out.