IPOD Abstract for presentation (Poster or Podium)
Intelligent Transportation Systems
Haley Townsend, PMP, SSM, MS (she/her/hers)
Data Scientist
Noblis
Galloway, Ohio, United States
Anand Seshadri
Systems Engineer
Noblis
Reston, Virginia, United States
Sammy Fellah
Machine Learning Intern
Noblis
Blacksburg, Virginia, United States
Background and Scope: Autonomous systems rely on artificial intelligence (AI)-based perception systems to make informed decisions about their maneuvers or actions. Unlike human perception, AI perception lacks situational awareness, historical knowledge, and common sense. Seemingly minor data aberrations (e.g., a pedestrian partially obscured in a video frame by fog or a carried package) can lead to major perception failures that directly impact the safety of road users. Edge cases, or unknown unsafe scenarios that are rare and randomly distributed, are difficult to account for in model development, validation, and testing without driving hundreds of thousands of miles in the hope of encountering them “in the wild.”
Objective: Transportation agencies at all levels (local, state, and federal) are responsible for supporting the safe deployment of technology. AI-based perception systems are becoming increasingly prevalent in roadway infrastructure and vehicles alike. Testing these systems against edge cases and identifying vulnerabilities before they lead to severe injury or worse is critical for the safe deployment of autonomous systems on our nation’s roadways. Transportation agencies need to be able to test and evaluate multiple autonomous systems and proposals for their deployment (e.g., from a procurement mechanism) in a scalable, comprehensive, and cost-effective manner to ensure they meet safety criteria against edge cases.
Preliminary Results: The Noblis research team developed an approach to create realistic, yet fully synthetic, edge case image data capable of fooling a common computer vision framework (i.e., You Only Look Once version 8 (YOLOv8)), which underlies many AI-based perception systems. This approach leverages generative AI tools (GATs), including ChatGPT and Leonardo AI (an AI image generator), and reinforcement learning (RL) to create synthetic image data biased towards unsafe, edge case situations. The images generated during this research contain situations that often lead to missed or incorrect detections (e.g., rain combined with unfamiliar accessories such as ponchos and umbrellas, vulnerable road users in wheelchairs, obscured scenes). These kinds of edge cases, when mishandled by autonomous systems, directly impact the safety of road users.
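The generate-score-bias loop described above can be sketched as a simple bandit search over scene conditions. Everything in this sketch is illustrative and hypothetical: the condition names, the stubbed miss probabilities, and the stand-in detector are placeholders for the abstract's actual GAT image-generation and YOLOv8 inference steps, which cannot be reproduced here.

```python
import random

# Hypothetical sketch only: candidate scene conditions are scored by how
# often a (stubbed) detector misses a pedestrian, and an epsilon-greedy
# bandit biases future generation toward miss-inducing conditions.
CONDITIONS = ["clear", "fog", "rain+poncho", "rain+umbrella", "wheelchair"]

# Stand-in for running a real detector (e.g., YOLOv8) on a generated image;
# returns True when the detector misses the pedestrian. These miss
# probabilities are invented for illustration, not measured results.
MISS_PROB = {"clear": 0.02, "fog": 0.35, "rain+poncho": 0.50,
             "rain+umbrella": 0.40, "wheelchair": 0.30}

def detector_misses(condition, rng):
    return rng.random() < MISS_PROB[condition]

def find_edge_case_conditions(trials=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit over generation conditions, rewarding misses."""
    rng = random.Random(seed)
    pulls = {c: 0 for c in CONDITIONS}
    misses = {c: 0 for c in CONDITIONS}
    for _ in range(trials):
        if rng.random() < epsilon:
            cond = rng.choice(CONDITIONS)  # explore a random condition
        else:                              # exploit the best miss rate so far
            cond = max(CONDITIONS,
                       key=lambda c: misses[c] / pulls[c] if pulls[c] else 1.0)
        pulls[cond] += 1
        if detector_misses(cond, rng):
            misses[cond] += 1
    # Observed miss rate per condition after the search
    return {c: (misses[c] / pulls[c] if pulls[c] else 0.0) for c in CONDITIONS}

rates = find_edge_case_conditions()
worst = max(rates, key=rates.get)
print(worst)  # the condition the loop learned to concentrate on
```

In the actual approach, the stubbed detector call would be replaced by generating an image for the chosen condition (via the GATs) and running real YOLOv8 inference, with the RL component steering generation toward conditions that produce missed or incorrect detections.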
The Noblis research team has demonstrated the feasibility of this approach for generating novel, realistic, and safety-impacting edge case image data that would otherwise be very difficult and/or costly to reproduce in the field during data collection. This work could help inform a generalized framework for safety validation and evaluation of autonomous systems in transportation (e.g., automated vehicles, intersection safety systems leveraging AI-based object detection).