
Intelligent Data Dissemination in Vehicular Networks: Leveraging Reinforcement Learning

Source
Studies in Computational Intelligence
ISSN
1860-949X
Date Issued
2025-01-01
Author(s)
Bhatia, Jitendra
Shah, Maanit
Prajapati, Rushi
Shah, Khush
Shah, Premal
Trivedi, Harshal
Joshi, Dhaval
DOI
10.1007/978-981-96-5190-0_9
Volume
1207
Abstract
Data dissemination in Vehicular Ad Hoc Networks (VANETs) is vital for the development and operation of intelligent transportation systems (ITS), as it enables the rapid and reliable exchange of critical information among vehicles and infrastructure. However, the dynamic nature of VANETs, characterised by high node mobility and frequently changing network topologies, poses significant challenges for conventional routing protocols, which struggle with scalability, Quality of Service (QoS), and efficient data dissemination. Traditional machine learning (ML)-based routing algorithms typically rely on predefined datasets for training and can struggle to adapt to the dynamic and unpredictable nature of VANET environments. In contrast, reinforcement learning (RL) excels by learning from interactions with the environment in real time. RL-based routing algorithms can adaptively optimize routing decisions based on constantly changing network conditions, such as vehicle density, mobility patterns, and communication link quality. This chapter explores the potential of RL to address these challenges by enabling adaptive routing protocols that dynamically adjust to network conditions. We provide a comprehensive overview of the fundamentals of RL and examine how these concepts can be applied to develop RL-based routing strategies in VANETs. Through detailed analysis and discussion, the chapter demonstrates the ability of RL to enhance the scalability, QoS, and overall performance of data dissemination in VANETs, paving the way for more robust and efficient vehicular communications in future ITS deployments.
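The abstract's core idea, that an RL agent at each node can adaptively pick a next hop by learning from observed link outcomes rather than from a fixed dataset, can be illustrated with a minimal Q-learning sketch. This is not the chapter's own algorithm; the class name `QRouter`, the reward convention, and all parameter values are assumptions chosen for illustration only.

```python
import random


class QRouter:
    """Minimal Q-learning next-hop selector for a vehicular node (illustrative sketch).

    Q-values estimate the long-run delivery value of forwarding via each
    one-hop neighbor; they are updated online from observed rewards such as
    successful delivery or measured link quality.
    """

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}            # (node, neighbor) -> estimated forwarding value
        self.alpha = alpha     # learning rate
        self.gamma = gamma     # discount factor for downstream value
        self.epsilon = epsilon # exploration probability

    def choose_next_hop(self, node, neighbors):
        """Epsilon-greedy choice among the current one-hop neighbors."""
        if random.random() < self.epsilon:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: self.q.get((node, n), 0.0))

    def update(self, node, next_hop, reward, next_neighbors):
        """Standard Q-learning update after observing the forwarding outcome."""
        best_next = max(
            (self.q.get((next_hop, n), 0.0) for n in next_neighbors),
            default=0.0,
        )
        old = self.q.get((node, next_hop), 0.0)
        self.q[(node, next_hop)] = old + self.alpha * (
            reward + self.gamma * best_next - old
        )


# Usage sketch: node 'A' learns to prefer the neighbor with better outcomes.
router = QRouter(epsilon=0.0)           # greedy, for a deterministic example
router.update('A', 'B', reward=1.0, next_neighbors=[])  # reliable link
router.update('A', 'C', reward=0.1, next_neighbors=[])  # lossy link
print(router.choose_next_hop('A', ['B', 'C']))          # prefers 'B'
```

Because neighbors enter and leave the Q-table as topology changes, this style of per-node learning adapts to mobility without retraining on a stored dataset, which is the contrast with supervised ML-based routing that the abstract draws.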
URI
https://d8.irins.org/handle/IITG2025/28415
Subjects
Deep learning | Intelligent transport system | Machine learning | Quality of service | Reinforcement learning | Software defined networking | Vehicular ad hoc networks