
Development of a Reinforcement Learning Environment for Connected Autonomous Vehicle Performance Optimization and Security Analysis


Metadata Field            Value                                         Language
dc.contributor.advisor    Qin, Xiao
dc.contributor.author     McDaniels, Corey
dc.date.accessioned       2025-05-07T18:17:28Z
dc.date.available         2025-05-07T18:17:28Z
dc.date.issued            2025-05-07
dc.identifier.uri         https://etd.auburn.edu//handle/10415/9788
dc.description.abstract   The global presence of Connected and Autonomous Vehicles (CAVs) has increased exponentially over the last decade, ranging from researchers testing their performance capabilities to studies assessing the new security dynamics that CAVs introduce. However, because of the costs of purchasing and maintaining CAVs, as well as in-field safety concerns for test subjects, there is a need for a flexible, cost-effective tool for analyzing CAV impacts. That need is the focus of this thesis. A survey of the current state of the art in autonomous-vehicle-centric simulators is provided in Chapter 2. It was found that a better understanding of machine learning paradigms, namely various reinforcement learning (RL) methodologies, allows CAV performance and security assessment to be enhanced through continuous action-space adaptation, and that various RL algorithms are beneficial for environmental exploration and optimization. After an in-depth literature review and the creation of a prototype reinforcement learning environment, Proximal Policy Optimization (PPO) was selected for its proficiency in both discrete and continuous action spaces. PPO provided better precision in high-speed driving actions under stochastic conditions than value-based deterministic methods and demonstrated the approximation capabilities of actor-critic neural network architectures. PPO-driven vehicle agents were trained in small- to large-scale environments and evaluated using spatiotemporal performance metrics. The results identified potential road vulnerabilities, provided a better understanding of CAV performance, and produced a viable algorithmic framework that can be used in future analyses.   en_US
dc.rights                 EMBARGO_NOT_AUBURN                            en_US
dc.subject                Computer Science and Software Engineering     en_US
dc.title                  Development of a Reinforcement Learning Environment for Connected Autonomous Vehicle Performance Optimization and Security Analysis   en_US
dc.type                   Master's Thesis                               en_US
dc.embargo.length         MONTHS_WITHHELD:24                            en_US
dc.embargo.status         EMBARGOED                                     en_US
dc.embargo.enddate        2027-05-07                                    en_US
dc.contributor.committee  Laurence, Rilett
dc.contributor.committee  Cottam, Adrian
dc.contributor.committee  Seals, Cheryl
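
The abstract names Proximal Policy Optimization (PPO) and actor-critic neural networks as the basis for the trained vehicle agents, but this record does not include the thesis's code or environment. The sketch below is therefore only an illustration of that kind of training setup under assumed tooling, not the author's implementation: it uses the stable-baselines3 PPO implementation and the third-party highway-env driving scenarios as a stand-in for the custom CAV environment, and any Gymnasium-compatible environment, discrete or continuous, could be substituted.

# Minimal PPO training sketch (illustrative only; not the thesis's code).
# Assumptions: stable-baselines3 provides the PPO implementation, and
# highway-env's "highway-v0" scenario stands in for the custom CAV environment
# described in the abstract.
import gymnasium as gym
import highway_env  # noqa: F401 -- importing registers the driving scenarios
from stable_baselines3 import PPO


def main() -> None:
    # A driving environment; swap in any Gymnasium-compatible CAV environment.
    env = gym.make("highway-v0")

    # "MlpPolicy" builds the actor-critic architecture: a shared MLP feeding
    # a policy (actor) head and a value (critic) head. PPO handles both
    # discrete and continuous action spaces through the same interface.
    model = PPO("MlpPolicy", env, verbose=1)

    # Train the agent; the timestep budget here is arbitrary.
    model.learn(total_timesteps=100_000)

    # Roll out the trained policy once and accumulate reward as a simple
    # performance check.
    obs, _ = env.reset()
    episode_return, done = 0.0, False
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        episode_return += float(reward)
        done = terminated or truncated
    print(f"Episode return: {episode_return:.2f}")


if __name__ == "__main__":
    main()

The same learn() and predict() calls apply whether the stand-in environment exposes discrete meta-actions or continuous steering and throttle commands, which mirrors the abstract's stated reason for selecting PPO over value-based deterministic methods.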
