6.4 oneM2M/MEC support of swarm computing
Swarm Computing refers to the coordination of multiple MEC/oneM2M instances to perform distributed computing tasks, leveraging the capabilities of edge devices and networks. In this paradigm, individual MEC/oneM2M nodes act as members of a swarm, each contributing processing power, storage, connectivity, or sensing capabilities to achieve a collective goal. The system operates in a decentralized and adaptive manner, where tasks can be dynamically partitioned, distributed, and recombined across nodes depending on resource availability, network conditions, and application requirements. This enables resilient, scalable, and low-latency processing, as tasks are executed closer to the data sources and devices while ensuring cooperative load balancing, fault tolerance, and energy efficiency. When mapped onto MEC/oneM2M interworking, distributed instances could be implemented as Application Entities (AEs) hosted on Middle Nodes (MNs) or Application Dedicated Nodes (ADNs). In this scenario, local coordination within a swarm could be supported by nearby Common Service Entities (CSEs). In ETSI MEC terms, MEC Services or MEC Applications can provide contextual information aggregated from other swarm nodes.
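For illustration, the following sketch shows how a swarm participant could register itself as an AE with a nearby CSE over the oneM2M HTTP binding, advertising its capabilities so that an orchestrator can later discover them. The CSE address, originator, resource names, and the capability labels are assumptions made for this sketch and are not values defined by oneM2M or ETSI MEC.

```python
# Illustrative sketch only: registering a swarm participant as a oneM2M <AE>
# over the HTTP binding. The CSE address, originator and resource names are
# assumptions made for this example.
import json
import uuid
import requests

CSE_BASE = "http://mn-cse.example.com:8080/cse-mn"   # assumed MN-CSE address
ORIGINATOR = "CswarmAgent1"                          # assumed originator ID

def register_swarm_agent():
    headers = {
        "X-M2M-Origin": ORIGINATOR,               # originator of the request
        "X-M2M-RI": str(uuid.uuid4()),            # unique request identifier
        "X-M2M-RVI": "3",                         # release version indicator
        "Content-Type": "application/json;ty=2",  # ty=2 -> <AE> resource
    }
    body = {
        "m2m:ae": {
            "rn": "swarmAgent1",                  # resource name (assumed)
            "api": "N.example.swarm.agent",       # application ID (assumed)
            "rr": True,                           # reachable for requests
            # Capabilities advertised as labels so an orchestrator can
            # discover them; this label format is an assumption.
            "lbl": ["swarm:agent", "cpu:2cores", "mem:512MB", "sensor:temp"],
        }
    }
    resp = requests.post(CSE_BASE, headers=headers, data=json.dumps(body))
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(register_swarm_agent())
```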
In this clause, the entities involved are described as follows:
- Swarm Agent (Local/Edge): a software entity running on a device, MEC host, or oneM2M Application Entity that performs local computation and participates in cooperative swarm behavior. It executes subtasks of a global distributed task and shares local state, sensor data, or intermediate results with peers. The local Swarm Agent may run on constrained devices or ADNs, focusing on lightweight processing and sensing, while the Edge Swarm Agent may run on MEC hosts or MNs, handling more complex subtasks and acting as a bridge to cloud/federated nodes.
- Swarm Entity: any participating user or platform (e.g., device, MEC host, AE, MN, or ADN) that contributes computing, storage, or sensing resources to the swarm. It executes assigned subtasks and exchanges state updates with other nodes via oneM2M group communication or MEC APIs.
- Swarm Collector: software (e.g., a CSE or MEC host) that aggregates results from multiple swarm nodes and produces a unified output. It collects partial results from distributed swarm nodes, performs aggregation (e.g., fusion of sensor data), and provides the final output back to the orchestrator, applications, or end-users.
- Swarm Orchestrator: the central coordination entity of the swarm that manages task distribution, resource allocation, synchronization, resilience, and policy enforcement. It splits global tasks into subtasks; assigns subtasks to swarm nodes based on capacity, connectivity, and energy; detects node failures and reassigns their tasks; and ensures that QoS, latency, energy, and fault-tolerance goals are met. It may operate centrally (e.g., at a cloud IN-CSE) or in a decentralized manner (e.g., at a MEC host or MN-CSE), depending on the deployment.
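To make the relationship between these roles concrete, the following minimal sketch models them as plain data structures. All attribute names and the selection heuristic are illustrative assumptions, not normative definitions.

```python
# Minimal sketch of the swarm roles as data structures. Attribute names and
# the selection heuristic are illustrative assumptions, not normative fields.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SwarmEntity:
    """Participating device/platform (AE, MN, ADN, MEC host) contributing resources."""
    entity_id: str
    cpu_capacity: float      # relative processing capacity
    storage_mb: int
    connectivity: float      # 0..1 link-quality estimate
    energy_level: float      # 0..1 remaining energy

@dataclass
class SwarmAgent:
    """Local or edge agent executing subtasks on behalf of an entity."""
    agent_id: str
    host: SwarmEntity
    tier: str                # "local" (device/ADN) or "edge" (MEC host/MN)

    def execute(self, subtask: Dict) -> Dict:
        # Placeholder for the actual processing of a subtask.
        return {"subtask": subtask["id"], "agent": self.agent_id, "result": "partial"}

@dataclass
class SwarmCollector:
    """Aggregates partial results (e.g., hosted on a CSE or MEC host)."""
    collected: List[Dict] = field(default_factory=list)

    def aggregate(self) -> Dict:
        return {"fused_results": self.collected}

@dataclass
class SwarmOrchestrator:
    """Splits tasks, assigns subtasks, and handles failures."""
    agents: List[SwarmAgent]

    def pick_agent(self, subtask: Dict) -> SwarmAgent:
        # Illustrative heuristic: prefer agents with higher capacity, energy and connectivity.
        return max(self.agents,
                   key=lambda a: a.host.cpu_capacity * a.host.energy_level * a.host.connectivity)
```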
Consistent with the deployment options described in Clause 5, swarm computing may be implemented through the following options:
- Option 1: In this configuration, Swarm Entities request Swarm Agents to perform local tasks and interact with a Swarm Collector located nearby at the edge. The Swarm Orchestrator coordinates task assignment and resource allocation. Swarm Agents execute subtasks locally and return intermediate results to the Swarm Collector, which consolidates the outputs and sends them back to the Swarm Orchestrator. The Swarm Orchestrator and the Swarm Collector may be co-located or deployed separately at the edge. This option emphasizes low-latency collaboration between distributed swarm nodes and is well suited for scenarios where tasks require real-time responses and localized aggregation, as illustrated in Figure 6.4.1. A simplified sketch of this exchange is given after the figure.

Figure 6.4.1 Swarm Computing Implementation - Option 1
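The following self-contained sketch illustrates the Option 1 exchange. The function names and in-memory message passing stand in for the actual oneM2M/MEC interfaces and are assumptions made purely for illustration.

```python
# Illustrative Option 1 flow: Swarm Entities request local tasks from Swarm
# Agents at the edge; a nearby Swarm Collector consolidates intermediate
# results and returns them to the Swarm Orchestrator. In-memory calls stand
# in for the real oneM2M/MEC interfaces.
from typing import Dict, List

def swarm_agent_execute(agent_id: str, subtask: Dict) -> Dict:
    """Swarm Agent: executes a subtask locally and returns an intermediate result."""
    return {"agent": agent_id, "subtask": subtask["id"], "value": subtask["payload"] * 2}

def swarm_collector_consolidate(partials: List[Dict]) -> Dict:
    """Swarm Collector (edge): consolidates intermediate results."""
    return {"consolidated": sum(p["value"] for p in partials), "count": len(partials)}

def swarm_orchestrator_run(task: Dict, agent_ids: List[str]) -> Dict:
    """Swarm Orchestrator: assigns subtasks and returns the consolidated output."""
    subtasks = [{"id": i, "payload": p} for i, p in enumerate(task["payloads"])]
    partials = [swarm_agent_execute(agent_ids[i % len(agent_ids)], st)
                for i, st in enumerate(subtasks)]
    return swarm_collector_consolidate(partials)

if __name__ == "__main__":
    # A Swarm Entity issues a task with three payload chunks.
    result = swarm_orchestrator_run({"payloads": [1, 2, 3]}, ["edgeAgent1", "edgeAgent2"])
    print(result)   # {'consolidated': 12, 'count': 3}
```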
- Option 2: Swarm Entities could be resource-constrained devices running lightweight processing. These devices can only execute simple tasks and rely on the Swarm Agents for heavier processing and on the Swarm Collector for results aggregation. The Swarm Orchestrator manages task distribution and offloading policies, ensuring that constrained Swarm Entities are not overloaded. The Swarm Collector aggregates results from multiple Swarm Agents (e.g., deployed at the edge, namely Swarm Agent 1 and Swarm Agent 2) in charge of processing the different subtasks, and returns consolidated outputs. This result is then shared with the Swarm Orchestrator as well as with the Swarm Agent that sent the original request. Finally, the initial Swarm Agent (Local) performs post-processing operations according to the requirements of the initial request, as shown in Figure 6.4.2. In this option, the Swarm Collector and the Swarm Orchestrator may be co-located on the edge platform. An illustrative offloading sketch is given after the figure.

Figure 6.4.2 Swarm Computing Implementation - Option 2
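The sketch below illustrates a possible Option 2 offloading policy: a constrained Swarm Entity keeps only lightweight subtasks and offloads heavier ones to edge Swarm Agents, the Swarm Collector aggregates the edge results, and the originating local Swarm Agent post-processes them. The complexity threshold and cost values are assumptions chosen for this example.

```python
# Illustrative Option 2 offloading policy; thresholds and cost values are assumptions.
from typing import Dict, List

COMPLEXITY_THRESHOLD = 5.0   # assumed cut-off for local execution

def local_execute(subtask: Dict) -> Dict:
    return {"id": subtask["id"], "where": "local", "value": subtask["cost"]}

def edge_execute(agent: str, subtask: Dict) -> Dict:
    return {"id": subtask["id"], "where": agent, "value": subtask["cost"] * 10}

def collector_aggregate(results: List[Dict]) -> Dict:
    return {"edge_total": sum(r["value"] for r in results)}

def run_option2(subtasks: List[Dict]) -> Dict:
    local_results, offloaded = [], []
    for st in subtasks:
        if st["cost"] <= COMPLEXITY_THRESHOLD:
            local_results.append(local_execute(st))   # stays on the constrained device
        else:
            offloaded.append(st)                       # delegated to the edge
    # Offloaded subtasks are split between two edge agents (Swarm Agent 1 and 2).
    edge_results = [edge_execute(f"edgeAgent{(i % 2) + 1}", st)
                    for i, st in enumerate(offloaded)]
    aggregated = collector_aggregate(edge_results)     # Swarm Collector
    # The initiating local Swarm Agent post-processes the consolidated output.
    return {"local": local_results, "edge": aggregated,
            "post_processed": aggregated["edge_total"] + sum(r["value"] for r in local_results)}

if __name__ == "__main__":
    print(run_option2([{"id": 1, "cost": 2.0}, {"id": 2, "cost": 8.0}, {"id": 3, "cost": 9.0}]))
```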
- Option 3: This option extends orchestration beyond the edge by combining local processing with cloud-level intelligence. Swarm Entities interact with Swarm Agents at the edge (close to the devices) to perform latency-sensitive processing. In addition, the Swarm Collector, located in a more centralized or cloud domain, executes high-level analytics such as large-scale pattern recognition or long-term optimization using previously collected datasets. The Swarm Orchestrator ensures synchronization between local results coming from the Swarm Agent at the edge and global insights from the cloud, as illustrated in Figure 6.4.3. The Swarm Collector (Global) receives the data produced by the Swarm Entity via the Swarm Orchestrator, performs application-specific pattern recognition, and shares the outcome back with the Swarm Entity via the Swarm Orchestrator and the Swarm Agent. This hybrid model leverages the strengths of both edge and cloud: low latency at the edge and high computational power in the cloud, enabling scalable and adaptive swarm intelligence. This option is more application-specific because it divides intelligence between cloud and edge based on the workload. While Options 1 and 2 mainly vary in how close computation is to the devices and how much is offloaded, Option 3 requires a split between real-time behavior at the edge and high-level reasoning in the cloud. Therefore, the placement of functions depends on the type of analytics, latency constraints, and data-handling rules of each application. An illustrative placement sketch is given after the figure.

Figure 6.4.3 Swarm Computing Implementation - Option 3
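A possible Option 3 placement decision is sketched below: latency-sensitive work is kept at an edge Swarm Agent, while large-scale or long-term analytics are routed via the Swarm Orchestrator to a global (cloud) Swarm Collector. The latency budget and the classification rule are assumptions made for this sketch.

```python
# Illustrative Option 3 placement decision; the latency budget and the
# analytics classification are assumptions made for this sketch.
from typing import Dict

EDGE_LATENCY_BUDGET_MS = 50   # assumed threshold for edge placement

def place_workload(workload: Dict) -> str:
    """Return where a workload should run under this illustrative policy."""
    if workload["latency_budget_ms"] <= EDGE_LATENCY_BUDGET_MS:
        return "edge-swarm-agent"          # real-time behavior stays at the edge
    if workload["analytics"] in ("pattern-recognition", "long-term-optimization"):
        return "global-swarm-collector"    # high-level reasoning goes to the cloud
    return "edge-swarm-agent"

if __name__ == "__main__":
    print(place_workload({"latency_budget_ms": 20, "analytics": "filtering"}))
    print(place_workload({"latency_budget_ms": 5000, "analytics": "pattern-recognition"}))
```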
To allow multiple MEC/oneM2M nodes to collaboratively perform distributed tasks, the orchestration mechanisms should take into account the following aspects:
- Task Decomposition: a global computation task is divided into smaller subtasks. The orchestrator distributes these subtasks to participating MEC/oneM2M nodes based on their declared processing and storage capacity, network connectivity, or CPU/GPU availability.
- Synchronization: appropriate synchronization mechanisms (e.g., publish/subscribe over oneM2M and MEC APIs) should ensure that swarm nodes operate on consistent views of data. To maintain consistent swarm knowledge across federated nodes, distributed state synchronization is required; an illustrative subscription-based sketch is given at the end of this clause.
- Task offloading: the orchestration mechanism should support task offloading policies where swarm members dynamically delegate high-complexity processing to nearby MEC/oneM2M nodes.
- Resilience: in case of node failures, the orchestrator dynamically reallocates unfinished subtasks to other swarm nodes.
- Coordination and Aggregation: results from swarm nodes are aggregated at a designated MEC host, IN-CSE, or MN-CSE, which acts as a collector node. oneM2M group communication primitives could be used to coordinate actions across swarm members, while MEC APIs could be used to optimize routing and QoS for task exchanges.
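The sketch below ties the decomposition, assignment, resilience, and aggregation aspects together in a single illustrative orchestration loop. The node attributes, scoring function, and failure model are assumptions chosen for this example and are not normative.

```python
# Illustrative orchestration loop covering task decomposition, capability-based
# assignment, reallocation after a node failure, and result aggregation.
# Node attributes, the scoring function and the failure model are assumptions.
import random
from typing import Dict, List

NODES = {  # declared capabilities of participating MEC/oneM2M nodes (assumed)
    "mn-cse-1": {"cpu": 4.0, "connectivity": 0.9, "energy": 0.8},
    "mec-host-1": {"cpu": 8.0, "connectivity": 0.95, "energy": 1.0},
    "adn-1": {"cpu": 0.5, "connectivity": 0.6, "energy": 0.4},
}

def decompose(task: Dict, parts: int) -> List[Dict]:
    """Task decomposition: split a global task into smaller subtasks."""
    return [{"id": i, "data": task["data"][i::parts]} for i in range(parts)]

def score(node: Dict) -> float:
    """Assignment heuristic combining capacity, connectivity and energy."""
    return node["cpu"] * node["connectivity"] * node["energy"]

def assign(subtasks: List[Dict]) -> Dict[int, str]:
    ranked = sorted(NODES, key=lambda n: score(NODES[n]), reverse=True)
    return {st["id"]: ranked[i % len(ranked)] for i, st in enumerate(subtasks)}

def execute_with_resilience(subtasks: List[Dict], assignment: Dict[int, str]) -> List[Dict]:
    results = []
    for st in subtasks:
        node = assignment[st["id"]]
        if random.random() < 0.1:                         # simulated failure of the assigned node
            healthy = [n for n in NODES if n != node]
            node = max(healthy, key=lambda n: score(NODES[n]))  # reassign the subtask
        results.append({"id": st["id"], "node": node, "result": sum(st["data"])})
    return results

if __name__ == "__main__":
    task = {"data": list(range(12))}
    subtasks = decompose(task, parts=3)
    results = execute_with_resilience(subtasks, assign(subtasks))
    print({"aggregated": sum(r["result"] for r in results), "details": results})
```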
The Swarm Orchestrator acts as the coordinator, load balancer, and fault manager of swarm computing. It does not always execute tasks itself; instead, it manages how, where, and when tasks are executed, ensuring that the swarm of MEC/oneM2M nodes behaves like a single resilient and adaptive system.
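As a complementary illustration of the synchronization aspect listed above, the following sketch shows how a swarm member could create a oneM2M <subscription> on a shared state container so that it is notified whenever another member publishes new state. The CSE address, container path, originator, and notification URI are assumptions made for this example.

```python
# Illustrative synchronization sketch: each swarm member creates a oneM2M
# <subscription> on a shared "swarmState" container so that it is notified
# whenever another member publishes new state. The CSE address, originator,
# notification URI and resource names are assumptions made for this example.
import json
import uuid
import requests

CSE_BASE = "http://mn-cse.example.com:8080/cse-mn"        # assumed MN-CSE
STATE_CONTAINER = f"{CSE_BASE}/swarmApp/swarmState"       # assumed container path

def subscribe_to_swarm_state(member_id: str, notify_uri: str):
    headers = {
        "X-M2M-Origin": member_id,
        "X-M2M-RI": str(uuid.uuid4()),
        "X-M2M-RVI": "3",
        "Content-Type": "application/json;ty=23",   # ty=23 -> <subscription>
    }
    body = {
        "m2m:sub": {
            "rn": f"stateSub-{member_id}",
            "nu": [notify_uri],                     # where notifications are sent
            "enc": {"net": [3]},                    # notify on creation of child <contentInstance>
        }
    }
    resp = requests.post(STATE_CONTAINER, headers=headers, data=json.dumps(body))
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    subscribe_to_swarm_state("CswarmAgent1", "http://swarm-agent-1.example.local/notify")
```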