Cui, Zhenhuan
2025-12-02
http://hdl.handle.net/10393/51128
https://doi.org/10.20381/ruor-31577

Edge caching for the Internet of Vehicles (IoV) has emerged as one of the most active research areas in computer networks, aiming to alleviate network congestion and reduce latency by deploying content caching and computational capabilities at Roadside Units (RSUs). Given the high mobility of vehicles and the diversity of content requests, RSUs require caching strategies that ensure both high accuracy and content diversity. RSUs naturally act as in-network caches, consistent with the Information-Centric Networking (ICN) paradigm. To address these challenges, we propose a proactive caching strategy named Graph Convolutional Network (GCN)-based Recommender System and Deep Reinforcement Learning Caching (GCNR-DRL). This approach integrates a GCN-based recommender system with deep reinforcement learning to predict vehicles’ content demands and dynamically optimize caching decisions. The reinforcement learning agent captures the complex spatio-temporal dynamics of vehicular networks and the intricate relationships between vehicles and content items, enabling precise cache replacement actions that significantly reduce transmission delays and mitigate network congestion during peak traffic hours. Furthermore, we extend this work with HiCaRe-RL, a method that hierarchically learns recommendation and caching strategies within an Actor-Critic reinforcement learning framework, coupling a cache controller with a per-vehicle recommender. By jointly optimizing recommendation and caching decisions, HiCaRe-RL further improves cache hit ratios, reduces latency, and lowers link load compared to existing strategies. By combining predictive power with adaptive caching decisions, our methods effectively enhance content dissemination efficiency in highly dynamic vehicular environments.
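The abstract describes a pipeline in which a recommender predicts per-vehicle content demand and a reinforcement learning agent uses those predictions to choose cache replacement actions. As a rough, hypothetical sketch of that interaction only, the fragment below stands in for the GCN recommender with a simple request-frequency scorer and for the deep RL agent with tabular Q-learning; all names (`recommender_scores`, `QCacheAgent`, `CACHE_SIZE`) are illustrative and do not come from the thesis.

```python
import random
from collections import defaultdict

CACHE_SIZE = 3
CONTENTS = list(range(8))  # hypothetical content catalogue

def recommender_scores(history):
    """Stand-in for the GCN-based recommender: score each content
    item by its request frequency in the observed history."""
    counts = defaultdict(int)
    for c in history:
        counts[c] += 1
    return {c: counts[c] for c in CONTENTS}

class QCacheAgent:
    """Tabular Q-learning agent that picks which cache slot to evict."""
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state, n_actions):
        if random.random() < self.eps:          # explore
            return random.randrange(n_actions)
        qs = [self.q[(state, a)] for a in range(n_actions)]
        return qs.index(max(qs))                # exploit best known action

    def learn(self, state, action, reward, next_state, n_actions):
        best_next = max(self.q[(next_state, a)] for a in range(n_actions))
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td

def simulate(requests, seed=0):
    """Replay a request trace at one RSU and return the cache hit ratio."""
    random.seed(seed)
    agent, cache, history, hits = QCacheAgent(), [], [], 0
    for req in requests:
        history.append(req)
        if req in cache:
            hits += 1
            continue
        if len(cache) < CACHE_SIZE:             # free slot: no decision needed
            cache.append(req)
            continue
        scores = recommender_scores(history)
        # State: cached items ordered by predicted demand (coarse abstraction).
        state = tuple(sorted(cache, key=lambda c: scores[c]))
        action = agent.act(state, CACHE_SIZE)
        evicted = cache[action]
        cache[action] = req
        # Reward evicting content the recommender rates as low-demand.
        reward = 1.0 if scores[evicted] <= min(scores[c] for c in cache) else -1.0
        next_state = tuple(sorted(cache, key=lambda c: scores[c]))
        agent.learn(state, action, reward, next_state, CACHE_SIZE)
    return hits / len(requests)
```

In the thesis the state would instead encode spatio-temporal vehicular features and GCN embeddings, and the agent would be a deep (or hierarchical Actor-Critic) network; the sketch only shows how demand predictions can feed the replacement decision.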
The ability of GNN-based models to learn latent representations of vehicular interactions, coupled with reinforcement learning’s adaptability, makes these approaches highly suitable for real-world IoV scenarios. Comprehensive experimental results demonstrate that both GCNR-DRL and HiCaRe-RL outperform traditional caching strategies. Our investigations underscore the potential of leveraging advanced GNN-based models and reinforcement learning techniques to drive efficient content placement and network resource management in dynamic IoV environments.

Language: en
Keywords: Internet of Vehicles (IoV); Edge Caching; Graph Neural Networks; Reinforcement Learning; Recommender Systems; Caching Optimization
Title: A Reinforcement Learning and Recommendation-Based Approach to Content Caching Optimization in Vehicular Networks
Type: Thesis