Date Approved

2024

Degree Type

Open Access Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department or School

College of Engineering and Technology

Committee Member

Samir Tout

Committee Member

Suleiman Ashur

Committee Member

Robert Carpenter

Committee Member

William Koolage

Abstract

Background: The rapid growth of automated systems and artificial intelligence (AI), particularly self-driving cars (SDCs), has attracted significant investment and can potentially contribute to humanity’s flourishing. Before widespread adoption, however, it is important to address ethical violations such as bias in AI, as highlighted by real-world cases of unfair outcomes in tools like facial recognition, hiring software, and pedestrian detection. In SDCs, bias in AI can lead to potentially fatal outcomes, emphasizing the need for a thorough examination of bias in these systems.

Purpose: To enhance AI ethics by providing tools that support transparency and value-sensitive design in SDCs.

Methods: The four-methodology framework comprises (a) an AI value-mapping database (AIVMDB), (b) a consumer values and SDC acceptance survey, (c) an ethical violation analysis and risk assessment (EVARA), and (d) a demonstration of the Open Ethics Data Passport (OEDP) for AI bias mitigation in SDCs.

Findings: The study's findings indicate that human welfare, universal usability, and trust were the prevalent values in the AIVMDB filtering-analysis case study output and significantly influenced SDC acceptance. Human welfare was highlighted as a critical value in the EVARA case study, with its violation posing a high risk. While the OEDP was not fully demonstrated in a case study, its elements were explained to showcase its potential utility and application, revealing patterns and areas of potential mitigation.

Contribution: The results of the AIVMDB, survey, EVARA, and OEDP analyses revealed concerning patterns in machine learning (ML) model training-data gathering and labeling practices that can be readily identified and potentially mitigated.

Conclusions: Aligning AI systems like SDCs with human values is achievable but requires careful attention. Identifying and mitigating ethical issues in ML model training data is crucial, as these mistakes can have severe consequences. Collaborative efforts are needed to ensure AI's positive impact on society while minimizing potential harm.
