Exploring the balance between explainability and automation in AI-enabled data policy enforcement systems

Adedayo Hakeem Kukoyi *

Purdue University, Department of Information Technology - Data Analytics, West Lafayette, Indiana, United States of America.
 
Research Article
World Journal of Advanced Research and Reviews, 2023, 20(03), 2429–2434
Article DOI: 10.30574/wjarr.2023.20.3.2423
 
Publication history: 
Received on 17 October 2023; revised on 20 December 2023; accepted on 28 December 2023
 
Abstract: 
Adopting artificial intelligence (AI) technologies for data governance and compliance monitoring has changed how organizations enforce their data policies. At the same time, these innovations create tension between explainability (understanding the reasoning behind an AI system's decisions) and automation (streamlining processes to the greatest extent possible). This research addresses the potential trade-offs between these two competing goals in AI-assisted data policy enforcement systems. A quantitative research design was used to sample 100 respondents from the data management, cybersecurity, and AI governance communities, and responses were analyzed using descriptive statistics (frequencies and percentages). Findings show that although 70% of respondents view transparency and interpretability of automated systems as essential for compliance assurance, 65% regard automation as the primary goal such frameworks should achieve. The results point to the need for hybrid governance frameworks that weave explainable artificial intelligence (XAI) approaches into automated enforcement. The study offers recommendations for balancing ethical explainability and operational efficiency in AI-driven automation frameworks.
 
Keywords: 
Explainable AI; Automation; Data Policy Enforcement; Governance; Transparency; Accountability
 