
Task Management Optimization in Vehicular Edge Computing


Publisher

Université d'Ottawa | University of Ottawa

Creative Commons

Attribution 4.0 International

Abstract

Vehicular Edge Computing (VEC) is a specialized extension of Mobile Edge Computing (MEC) designed to support real-time processing in intelligent transportation systems. It enables vehicles to offload computationally intensive tasks to nearby Roadside Units (RSUs) equipped with MEC servers, reducing latency for time-critical applications such as autonomous driving, intelligent traffic control, and Vehicle-to-Everything (V2X) communication. However, vehicle mobility and the short dwell time within an RSU's coverage range pose significant challenges for task offloading and scheduling. The growing volume of tasks from multiple vehicles can cause congestion, delays, and task drops, undermining overall system performance.

To establish a performance benchmark, we first employ deterministic algorithms, First-Come, First-Served (FCFS) and Shortest Deadline First (SDF), which are simple and fast methods for real-time task offloading. FCFS executes tasks in arrival order without prioritization, while SDF improves performance by prioritizing tasks with shorter deadlines. To explore more efficient scheduling, we then apply a metaheuristic method, Particle Swarm Optimization (PSO). This approach is first evaluated in a static scheduling environment, in which the algorithm runs once over the full task set and schedules all tasks at once, ignoring their real-time arrivals. PSO achieves the best performance in this static setting because algorithm execution time and the additional waiting caused by the static formulation are ignored. This method, called Offline Static PSO (Off-Sta-PSO), provides the theoretical upper bound. By accounting for the real-time execution and waiting times introduced in this scenario, we also establish the lower-bound case, Online Static PSO (On-Sta-PSO), which yields the worst performance. Recognizing the limitations of full task offloading, we also propose a task partitioning approach.
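As an illustration of the two deterministic baselines, the minimal sketch below shows that FCFS and SDF differ only in their ordering key. The single-server task model, the `Task` fields, and the numeric values are hypothetical, not the thesis's actual simulation parameters:

```python
from dataclasses import dataclass

@dataclass
class Task:
    tid: int
    arrival: float    # arrival time at the RSU (s)
    exec_time: float  # service time on the MEC server (s)
    deadline: float   # absolute deadline (s)

def schedule(tasks, key):
    """Run tasks sequentially on one server in the given order;
    return the IDs of tasks that finish by their deadline
    (infeasible tasks are dropped and skipped)."""
    clock, completed = 0.0, []
    for t in sorted(tasks, key=key):
        start = max(clock, t.arrival)
        if start + t.exec_time <= t.deadline:
            clock = start + t.exec_time
            completed.append(t.tid)
    return completed

tasks = [Task(1, 0.0, 2.0, 9.0), Task(2, 0.5, 1.0, 2.0), Task(3, 1.0, 1.0, 4.0)]
fcfs = schedule(tasks, key=lambda t: t.arrival)   # arrival order
sdf  = schedule(tasks, key=lambda t: t.deadline)  # earliest deadline first
```

In this toy instance FCFS drops the tight-deadline task 2, while SDF completes all three tasks, mirroring the improvement the abstract attributes to deadline-based prioritization.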
Some portions of the tasks are offloaded to RSUs, while the remaining parts are processed locally by onboard vehicle processors, reducing latency and dropping fewer tasks.

Moving to real-time task offloading, although Dynamic PSO enhances task scheduling, its high computational cost and long convergence times limit its real-time viability. To address this, we introduce the Online Dynamic Cost-Driven Algorithm (On-Dyn-CDA), a novel real-time scheduling algorithm. Unlike PSO, it operates in milliseconds, adapts to vehicle mobility, congestion, and RSU load, and requires no pre-training. On-Dyn-CDA surpasses Dynamic PSO by 3.42% in task loss, reduces latency by 29.22%, and executes in just 0.05 seconds in the most complex scenario, compared with the 1330.05 seconds required by Dynamic PSO.

Finally, we compare online PSO with Deep Reinforcement Learning (DRL) methods, namely Deep Q-Network (DQN) and Proximal Policy Optimization (PPO). Reinforcement Learning (RL) is grounded in the Markov Decision Process (MDP), which assumes a stationary environment, whereas VEC systems are inherently dynamic due to vehicle mobility, changing task arrivals, and variable RSU associations. To address this mismatch, our framework introduces a decision-window mechanism that segments incoming tasks into locally stable intervals, allowing the RL agent to operate under near-stationary conditions. Additionally, we design adaptive reward functions that guide the agent to minimize both task drops and end-to-end (E2E) latency, based on real-time task characteristics and server availability. The state space includes dynamic context such as MEC server availability and task deadlines, enabling informed and responsive decision-making. Our DQN and PPO models are trained on diverse mobility traces and evaluated in unseen environments, demonstrating strong generalization without retraining.
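The decision-window mechanism described above can be sketched as follows; the fixed window length and the grouping rule are illustrative assumptions, not the thesis's exact segmentation scheme:

```python
def decision_windows(arrivals, window):
    """Group task arrival times into consecutive fixed-length windows,
    so an RL agent can treat each window as a near-stationary
    sub-problem instead of facing one unbounded dynamic stream."""
    batches = {}
    for t in arrivals:
        batches.setdefault(int(t // window), []).append(t)
    return [batches[k] for k in sorted(batches)]

# Four arrivals segmented with a 1-second window: empty windows are
# simply skipped, and each batch is scheduled as one decision step.
windows = decision_windows([0.2, 0.9, 1.4, 3.7], window=1.0)
```

Within each window the task mix and server state change little, which is what lets the agent's MDP assumptions hold approximately.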
Despite the non-stationarity of VEC, this design enables robust online scheduling, with DQN outperforming Dynamic PSO in both latency and task reliability. DQN substantially reduces execution time, completing in only 10.62 seconds, lowers dropped tasks by 2.5%, and decreases E2E latency by 18.6%. Compared with PPO, DQN achieves a 57.1% reduction in execution time, along with a 5.7% decrease in E2E latency and a 1.7% reduction in dropped tasks. These results demonstrate the effectiveness of this research in addressing the core challenge of real-time task scheduling in VEC systems.
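As a sketch of the task-partitioning idea introduced earlier, the toy model below assumes the local and offloaded portions of a task execute in parallel, so E2E latency is the maximum of the two paths; all rates, the task size, and the split granularity are hypothetical, not the thesis's system model:

```python
def partition_latency(size_bits, alpha, local_rate, tx_rate, mec_rate):
    """E2E latency when a fraction alpha of a task is offloaded to an
    RSU and the rest runs on the onboard processor, with both parts
    executing in parallel (illustrative model only)."""
    local = (1 - alpha) * size_bits / local_rate                    # onboard compute
    offload = alpha * size_bits / tx_rate + alpha * size_bits / mec_rate  # uplink + MEC
    return max(local, offload)

# Sweep candidate split ratios in steps of 0.1 and keep the best one.
best = min((partition_latency(8e6, a / 10, 2e6, 10e6, 20e6), a / 10)
           for a in range(11))
```

With these rates, offloading everything or nothing is worse than the balanced split: full local execution takes 4 s and full offload 1.2 s, while splitting at alpha = 0.8 brings the latency down to 0.96 s by keeping both paths busy.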

Keywords

Mobile Edge Computing, Vehicular Task Offloading, Optimization, Deep Reinforcement Learning
