Edge Computing as Antidote to Remote Engineering Challenges
Cloud and edge, when used in combination, yield a novel, cost-efficient IoT deployment solution for smart products.
Engineering Computing News
June 1, 2021
Of all the accolades given to the Mars Perseverance Rover, add one more: most distant successful deployment of edge computing.
Edge computing is a distributed computing model that brings computation and data storage closer to where the data is collected. In the case of Mars exploration, “closer” means millions of miles from Earth: with one-way signal latency measured in minutes rather than milliseconds, computation has to happen on the rover itself.
Edge computing offers local computation, making for faster decisions. Cloud computing offers fast computation of large data sets, and the ability to run complex artificial intelligence and machine learning algorithms. Working together, cloud and edge computing offer a new and cost-effective IoT deployment solution for smart products.
NASA’s Perseverance and its sidekick drone helicopter Ingenuity have to operate without direct control from Earth. Most of the data analysis is done on site using a radiation-hardened version of the PowerPC 750, the CPU best known as the processor in the 1998 iMac. All data is transmitted back to Earth, but the roughly 12-minute one-way transmission time makes direct control impossible.
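The 12-minute figure follows directly from the physics. A quick back-of-envelope calculation (the distances below are approximate; Earth–Mars separation varies continuously with orbital position) shows why Earth-side control is off the table:

```python
# One-way radio signal delay between Earth and Mars at various separations.
# Distances are approximate; actual separation varies with orbital position.
C_KM_PER_S = 299_792  # speed of light in km/s

distances_km = {
    "closest approach": 55e6,
    "typical": 225e6,
    "farthest": 400e6,
}

for label, d in distances_km.items():
    minutes = d / C_KM_PER_S / 60
    print(f"{label}: {minutes:.1f} minutes one way")
```

At a typical separation of about 225 million km, the one-way delay is roughly 12.5 minutes; even at closest approach it is over 3 minutes, far too long for any real-time control loop.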
Even before Perseverance went to work exploring Mars, an array of sensors in its heat shielding gathered data during descent. Sent back to Earth after landing, this data allows NASA engineers to upgrade heat shields and other essential landing equipment based on flight experience and not just simulation.
International Space Station on Edge
The edge computing environment on Mars is not the first extraterrestrial deployment of its kind. Hewlett Packard Enterprise (HPE) and NASA are testing a new computer to run artificial intelligence routines on the International Space Station. “Spaceborne Computer-2” will allow astronauts to process data locally, in minutes instead of months as with previous low-power computing resources on board.
“The most important benefit to delivering reliable in-space computing is making real-time insights a reality,” Dr. Mark Fernandez, HPE’s principal investigator for Spaceborne Computer-2, recently told news site FedScoop. “Space explorers can now transform how they conduct research based on readily available data and improve decision-making.”
Such local-and-remote computing working in tandem is growing rapidly for more down-to-earth applications. Engineering organizations are finding benefit in shifting from datacenter- and workstation-centric operations to embracing remote collaboration, using computing resources both on site and in the cloud.
“Edge computing gives immediacy,” notes Nick Brackney, senior consultant for cloud at Dell Technologies. “Workflows that get pushed to the Edge have volatile data; its use is required immediately.”
Such immediacy is essential for consistency across a dispersed ecosystem. Applications at the remote site can make real-time decisions based on deep learning models trained, and continually retrained, in the cloud.
“Real-time operations at the Edge; training and optimization at the cloud,” notes Brackney. “It is a virtuous cycle for autonomous applications.”
New technologies such as autonomous vehicles generate terabytes of data per day, a challenge to process no matter where the data resides. 5G improves latency, but a vehicle still generates too much data on the device for real-time operational decisions to be made remotely.
“The challenge [for engineering] is to balance how much to send to the cloud and how much to process at the edge,” notes Brackney. “Each workload is different.”
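The split Brackney describes can be sketched in a few lines. The example below is a hypothetical illustration, not any vendor’s API: the edge node acts on raw sensor readings immediately, and only a compact summary travels upstream for cloud-side training. The threshold value and field names are assumptions for the sketch.

```python
# Hypothetical sketch: an edge node makes immediate decisions on raw data
# and forwards only a compact aggregate to the cloud for model training.
from statistics import mean

THRESHOLD = 80.0  # assumed alarm threshold for a sensor reading

def process_at_edge(readings):
    """Act on raw data locally; return only an aggregate for the cloud."""
    alarms = [r for r in readings if r > THRESHOLD]  # real-time decision path
    return {
        "count": len(readings),       # how many samples were seen
        "mean": mean(readings),       # one number instead of the raw stream
        "alarms": len(alarms),        # how many triggered local action
    }

readings = [72.1, 75.3, 88.9, 79.5, 91.2]
print(process_at_edge(readings))
```

The design point is the ratio: the edge consumes the full raw stream, while the cloud receives kilobytes of aggregates, which is where the per-workload balancing decision lives.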
At a macro level, this trend of dividing data processing between a central and a remote location is an example of what experts call “data gravity.” The new generation of computationally intensive products is the gravitational force drawing applications, services and other data just as a planet draws everything toward its center.
“The theory is that data acts like gravity,” notes Matt Trifiro, CMO of Vapor IO, a company working on what they call the Kinetic Edge, described as a wide-scale network for solving edge computing issues.
“A petabyte takes a month to send on today’s internet,” says Trifiro. Data gravity is when the application to process this data is sent to where the petabyte of data resides. “There is no one edge,” notes Trifiro. “You must be able to access the edge everywhere as one common set of infrastructure, [one in which] companies bring their technology to the common infrastructure.”
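Trifiro’s “a month” figure is easy to sanity-check. The arithmetic below assumes a sustained, uncontended link at each quoted speed, which real networks rarely deliver:

```python
# Back-of-envelope: time to move one petabyte at various sustained link speeds.
PETABYTE_BITS = 8e15  # 1 PB = 1e15 bytes = 8e15 bits (decimal petabyte)

for gbps in (1, 3, 10):
    seconds = PETABYTE_BITS / (gbps * 1e9)
    print(f"{gbps} Gbps: {seconds / 86_400:.1f} days")
```

At 1 Gbps the transfer takes over 90 days; at a sustained 3 Gbps it comes in right around a month. Hence data gravity: moving the application to the petabyte is cheaper than moving the petabyte to the application.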
On a practical level, data gravity can become a source of ongoing contention between the information technology (IT) and the operational technology (OT) teams.
“These are all snowflake deployments; every use case is different,” notes Dell’s Brackney. “IT has its issues. The OT people who own the factory are more device oriented. To succeed at the Edge, IT and OT must come together and digitize all their workflows. This is the chasm to cross.”
Containers and Kubernetes
Two newer data technologies coming to the fore with the rise of edge devices are containers and Kubernetes. Containers are like virtual machines, but lighter weight and defined for a narrower set of capabilities. A container has its own file system and its own share of the local CPU. It is decoupled from its underlying infrastructure, making it portable across clouds and OS distributions.
Containers offer agile application creation and deployment, crucial in creating IoT-enabled products. They decouple development from operational issues, allowing an application image to be built when required rather than only at initial product deployment. Containers are loosely coupled in operation and let users easily deploy distributed, elastic services; there is no monolithic OS stack as in the typical workstation or server.
Kubernetes is a portable, extensible open source platform for container orchestration and management. The Kubernetes platform facilitates declarative configuration and automation. Google developed the original technology and open-sourced it; the project is now maintained by a growing, robust open source community under the Cloud Native Computing Foundation.
Kubernetes provides a way to run containers as elements in a distributed network. The Kubernetes platform takes care of deployment, service discovery, load balancing and storage orchestration. If a container fails, the Kubernetes platform can replace or isolate it. However, Kubernetes is not a complete Platform-as-a-Service offering. Instead, it is more like a sack of IT pieces for building and deploying independent operations.
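The self-healing behavior described above comes from Kubernetes’ declarative model: a controller repeatedly compares the desired state to the observed state and acts on the difference. The toy loop below illustrates that idea only; it is not the real Kubernetes control plane, and the workload names are invented:

```python
# Toy model of the declarative reconciliation loop behind Kubernetes:
# compare desired state to observed state, then start or stop replicas.
# Illustration only -- not the actual Kubernetes control plane.

desired = {"web": 3, "worker": 2}             # declared replica counts
observed = {"web": ["web-0", "web-1"],        # one web replica has failed
            "worker": ["worker-0", "worker-1"]}

def reconcile(desired, observed):
    """Return the actions needed to drive observed state toward desired state."""
    actions = []
    for name, want in desired.items():
        have = len(observed.get(name, []))
        if have < want:
            actions.append(f"start {want - have} x {name}")
        elif have > want:
            actions.append(f"stop {have - want} x {name}")
    return actions

print(reconcile(desired, observed))  # the failed web replica gets replaced
```

Because the operator declares only the end state, the same loop handles first deployment, failure recovery, and scale-up without separate procedures for each.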
Edge and the Developing World
The ideas behind edge computing and its relationship to cloud computing may seem fairly straightforward in countries with an established internet infrastructure. In countries with limited infrastructure, the national telecommunications companies (telcos) are the data providers.
“Add 5G and you have a natural last-mile answer to end users,” notes Michael DeNeffe, director of product development for Cloud at AMD.
As a vendor heavily invested in graphics, AMD sees edge computing as a great way to enable graphics-intensive workflows that use virtual reality or augmented reality (VR/AR) in a more location-independent fashion.
“VR/AR in engineering workflows are awesome, but unless you are directly connected to the cloud at high bandwidth it gets dicey,” according to DeNeffe. The solution, DeNeffe says, is fast edge networks using 5G.
“Telcos are trying to figure out how to monetize this,” DeNeffe notes. “They are looking at workloads in various applications, including high-performance engineering. Companies can now hire engineers in time zones all over the world. [With edge computing] they can share data sets and take advantage of local capabilities. There is no need for centralized work.”
Edge and the Cost of Engineering
Companies weigh two factors regarding engineering talent, DeNeffe says: head count and the cost to deploy engineers.
“Rather than a team of ten engineer[s] in California designing a product, hire 40 engineers globally collaborating over edge networks,” says DeNeffe, who adds this has a fundamental quality of “quicker time to money and more efficiency. Network availability means hiring engineers anywhere.”
How should engineering teams evaluate their edge computing needs? DeNeffe says they need to focus on the problem to solve.
“Edge brings you closer to the actual compute,” DeNeffe says. “It is a honeycomb of aspects that makes edge [computing] exciting. If you provide capability on networking or hardware, you always find people taking advantage of it for software or engineering workflows.”
Fast networking is opening use cases previously thought to be impractical, such as using VR/AR technology in remote worksites.
“When virtual reality first came out, we realized you needed a direct connection to a computer or an extremely fast network,” says DeNeffe. “Use cases broke down. But now they are picking up again thanks to fast networking.”
About the Author
Randall S. Newton is principal analyst at Consilia Vektor, covering engineering technology. He has been part of the computer graphics industry in a variety of roles since 1985.