Completion Date

Spring 4-14-2024

Document Type

Thesis

Degree Name

Master of Science (MS)

Program or Discipline Name

Computer and Information Sciences

Secondary Program or Discipline Name

Cybersecurity Operations and Control Management

First Advisor

Sangwhan Cha

Second Advisor

Majid Shalan

Abstract

In the rapidly evolving field of artificial intelligence (AI), the interpretability and reliability of deep learning models are severely hindered by their complexity and opacity. Enhancing the transparency and interpretability of AI systems for humans is the primary objective of the emerging field of explainable AI (XAI). Attention mechanisms, which are central to much of XAI's work, are inspired by human cognitive processes. These mechanisms allow neural networks to dynamically focus on the relevant parts of the input data, which improves both interpretability and performance. This report presents an in-depth discussion of attention mechanisms in neural networks within XAI, along with an analysis of their theoretical foundations, architectural applications, and the empirical evidence for their effectiveness in improving model transparency. The report provides a comprehensive analysis of the role of attention mechanisms in helping AI models address ethical concerns, comply with regulatory requirements, and foster deeper understanding of and trust in AI systems. Through this analysis, the report contributes to the discussion on aligning AI with human values and cognitive processes so that its advances are both impactful and responsible.
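
The abstract's claim that attention lets a network "dynamically focus on relevant parts of the input" can be made concrete with a minimal sketch of scaled dot-product self-attention in NumPy. The function names, shapes, and toy data below are illustrative assumptions, not code from the thesis; the attention-weight matrix it returns is the kind of per-input focus distribution that XAI work inspects.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each row of the returned weight matrix sums to 1 and shows how
    # strongly one query position "attends" to each input position --
    # the interpretable signal the abstract refers to.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # query-key similarity
    weights = softmax(scores, axis=-1)   # focus distribution per query
    return weights @ V, weights

# Toy usage (hypothetical data): 3 input tokens, 4-dimensional features.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
output, attn = scaled_dot_product_attention(X, X, X)  # self-attention
print(attn)  # each row: one token's attention over all inputs
```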
