Siun-Chuon Mau, Senior Principal Research Engineer, LGS Innovations
Cloud computing is naturally attractive to many users – it can save a lot of time and money through a shared network of remote servers that store, manage, and process data. Instead of having to build our own server networks, data storage facilities, and business or mission programs, we can use the cloud to share information and run all kinds of applications. In principle, offloading all that “IT stuff” and data storage tasks can help a company or government agency focus on their core business or mission—provided that access to cloud-based resources is fast, reliable, cost-effective, and secure.
And there’s the issue.
Any time you locate your corporate or agency resources on a third-party server, you incur communications delays (a.k.a. “latency”), potential service interruptions and other quality-of-service (QoS) issues, and increased security risks. In defense and intelligence environments, of course, these risks are unacceptable.
As the figure below shows, the conventional cloud model features a plethora of end-user devices at the network’s edge, accessing the cloud via third party servers and communications networks. “Fog computing” seeks to stretch the cloud over the user universe, so that all of those end-user and edge devices, as well as the communications networks, are IN the cloud.
In the natural world, fog is basically a cloud that has spread to the ground and encompassed its surroundings. Similarly, in fog computing, your laptops, smartphones, remote devices and vehicles, and other near-at-hand computing and communications infrastructure (such as cell towers) share the load of storing and distributing data and applications, where and when they’re needed—faster and more cost-effectively than relying on distant servers and communications channels, and with fewer latency delays and interruptions in service.
Fog computing also means that network management tasks—access control, configuration, measurement, and security—all occur within the cloud as well. As the figure notes, fog computing is also known as “dispersed” computing, in that all network devices, data storage, applications, and communications are dispersed within and throughout the fog. Given the additional efficiencies and improved security this self-contained fog environment enables, fog computing is of great interest to the government, industry, and academics.
For fog computing to operate seamlessly, however, relationships among computing, networking, and applications need to be re-architected. The current Internet was built around the “end-to-end (E2E) model,” which argues for minimalist in-network functionality while pushing application- or service-specific complexity to the devices or end-points at the network’s edge. This makes less sense now, considering the structure of fog applications, which disperse over a range of network elements and intermediate nodes. The limitations of E2E are evident in mobile services: pushing complex functions to mobile devices with unreliable network access causes problems, while redesigning those functions to disperse throughout the network alleviates them.
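A back-of-the-envelope model makes the mobile-services point concrete. The sketch below is purely illustrative (it is not from the article): it compares the expected completion time of a chatty service when each round-trip must cross a long, lossy path to a distant server versus terminating at a nearby in-network node. The round-trip counts, RTTs, and success probabilities are invented for illustration.

```python
# Illustrative model: expected completion time for a service needing several
# round-trips, comparing an end-to-end design (mobile device <-> distant
# cloud) with a dispersed design (mobile device <-> nearby in-network node).
# All numeric values below are assumptions chosen for illustration.

def expected_service_time(round_trips, rtt_s, p_success):
    """Expected wall-clock time when each round-trip succeeds with
    probability p_success and failures are retried (geometric model)."""
    expected_attempts = 1.0 / p_success  # mean of a geometric distribution
    return round_trips * rtt_s * expected_attempts

# End-to-end: chatty protocol over a long, lossy path to a distant server.
e2e = expected_service_time(round_trips=20, rtt_s=0.200, p_success=0.7)

# Dispersed: the same protocol terminates at a nearby in-network node.
dispersed = expected_service_time(round_trips=20, rtt_s=0.020, p_success=0.95)

print(f"end-to-end: {e2e:.2f} s, dispersed: {dispersed:.2f} s")
```

Under these assumed numbers the dispersed design finishes an order of magnitude sooner, because both the per-hop delay and the retry rate shrink when the complex function moves off the unreliable last hop.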
The Defense Advanced Research Projects Agency (DARPA) has a history of inspiring the rethinking of the relationship between computing, networking, and applications. The DARPA Active Networks Program (DANP) was an early effort, in the late 1990s, that questioned the E2E model. Its key idea, as summarized in the figure, is programmable network elements between the edge end-points. Today, with hardware costs reduced and capabilities improved by orders of magnitude, the functional dispersion that forms the basis of both active networking and fog computing is becoming practical. With sufficient adoption, fog computing could become the killer app for active networking or its descendant, software-defined networking.
The confluence of the intellectual trajectory started by DANP and the efficiency and security benefits of fog computing has resulted in the DARPA Dispersed Computing (DCOMP) program. The program, initiated by Dr. Stuart Wagner and taken over by Dr. Jonathan M. Smith (a principal investigator in DANP) after Dr. Wagner finished his tenure at DARPA, aims to develop algorithms and protocols that harness geographically dispersed and possibly unreliable computing and networking capabilities to boost application performance by orders of magnitude. While there are similarities to existing fog research, DCOMP focuses more closely on algorithms than architectures, on broadening applicability to multiple platforms (many operating in harsh and/or contested network conditions), and on boosting mission awareness and prioritization capabilities.
LGS Innovations is participating in two DCOMP Technical Areas. On TA1, LGS and three subcontractors are focusing on algorithms for making computation decisions, such as where code and data should be located within the fog to maximize efficient and reliable execution. LGS is a subcontractor on TA2, which focuses on enhancing such code and data movements using programmable network protocol stacks.
“An initial result from our TA1 project indicates at least an order of magnitude gain over cloud-only computing for an image processing application running over a demo network,” says co-principal investigator Dr. Siun-Chuon Mau, a researcher in the LGS Cyber Solutions division. “Our algorithm decides where to assign the various subtasks in an application by reasoning over the impacts of potential assignments, given the network’s computing and networking capabilities.”
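To illustrate the flavor of such a decision process — not the actual LGS/DCOMP algorithm, which is not described here — the sketch below greedily places each subtask on the node with the lowest estimated finish time, weighing each node's compute speed, its already-queued work, and the cost of moving the subtask's input data to it. The node names, speeds, and bandwidths are hypothetical.

```python
# Hypothetical, simplified sketch of dispersed task assignment: greedily
# place each subtask on the node with the lowest estimated finish time,
# accounting for node speed, queued load, and input-data transfer cost.
# All node names and numbers are invented for illustration.

def assign_subtasks(subtasks, nodes):
    """subtasks: list of (name, work_units, input_mb) tuples.
    nodes: dict of node name -> {"speed": work_units/s, "bandwidth": MB/s}.
    Returns {subtask_name: node_name}, chosen greedily by estimated
    finish time given earlier assignments."""
    load = {n: 0.0 for n in nodes}  # seconds of work already queued per node
    assignment = {}
    for name, work, data_mb in subtasks:
        def finish_time(n):
            spec = nodes[n]
            transfer = data_mb / spec["bandwidth"]  # ship input data to node
            compute = work / spec["speed"]          # run the subtask there
            return load[n] + transfer + compute
        best = min(nodes, key=finish_time)
        load[best] = finish_time(best)
        assignment[name] = best
    return assignment

nodes = {
    "edge-phone":   {"speed": 1.0,  "bandwidth": 50.0},  # nearby, slow CPU
    "cell-tower":   {"speed": 5.0,  "bandwidth": 20.0},  # in-network node
    "remote-cloud": {"speed": 50.0, "bandwidth": 2.0},   # fast but far away
}
subtasks = [("decode", 2.0, 10.0), ("detect", 40.0, 10.0), ("report", 1.0, 0.1)]
print(assign_subtasks(subtasks, nodes))
```

With these invented numbers, light subtasks land on nearby nodes while the heavy one is worth shipping to the distant cloud — the kind of trade-off a dispersion algorithm must reason about for every potential assignment.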
Future research areas will likely include incorporating resiliency features into this algorithm and extending it to various classes of applications, programmable nodes, and protocol stacks, as well as large-scale demonstrations.
The views, opinions, and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. government.