Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study
The inherent complexity of Artificial Intelligence (AI) and Machine Learning (ML) tools creates significant barriers for non-expert users. Traditional ML workflows require specialized programming and statistical knowledge, limiting widespread adoption across various domains where these technologies could provide substantial benefits.
| Main Author: | Sahidon, Muhammad Alif Danial |
|---|---|
| Format: | Thesis (University of Nottingham only) |
| Language: | English |
| Published: | 2025 |
| Subjects: | automated machine learning (AutoML); non-expert users; explainable AI (XAI); user-centered design; technology acceptance model (TAM) |
| Online Access: | https://eprints.nottingham.ac.uk/81343/ |
| _version_ | 1848801316731617280 |
|---|---|
| author | Sahidon, Muhammad Alif Danial |
| author_facet | Sahidon, Muhammad Alif Danial |
| author_sort | Sahidon, Muhammad Alif Danial |
| building | Nottingham Research Data Repository |
| collection | Online Access |
| description | The inherent complexity of Artificial Intelligence (AI) and Machine Learning (ML) tools creates significant barriers for non-expert users. Traditional ML workflows require specialized programming and statistical knowledge, limiting widespread adoption across various domains where these technologies could provide substantial benefits. This research aimed to develop and evaluate VisAutoML, an automated machine learning tool specifically designed to provide non-expert users with a transparent and user-friendly ML development experience for tabular data. The study sought to identify key factors influencing tool acceptance, understand specific challenges faced by non-experts, and create novel design principles to address these challenges. The research employed a five-stage iterative user-centered design methodology. This included a mixed-methods study utilizing an extended Technology Acceptance Model (TAM) to identify acceptance factors and user challenges. The design integrated technology-enhanced scaffolding and Explainable Artificial Intelligence (XAI) principles tailored for non-experts, featuring visualizations of activities, demonstration of scaffold functions, contextually relevant support, and progressive disclosure of XAI visualizations. Two versions of VisAutoML were developed and evaluated against both commercial alternatives and established benchmarks. An initial comparison between VisAutoML 1.0 and H2O AutoML showed significantly higher System Usability Scale scores (61.5 vs 38.5) for VisAutoML and a 20.94% increase in correct answers on knowledge assessments. The redesigned VisAutoML 2.0 demonstrated substantial improvements, with 75% of participants completing ML model development tasks in under 5 minutes. User Experience Questionnaire results showed 'good' scores for pragmatic quality (M=1.60, SD=0.912) and 'excellent' scores for hedonic quality (M=1.59, SD=0.899) and overall usability (M=1.60, SD=0.851). Trust measures were moderate (M=26.11, SD=4.67), while perceived explainability ratings were high (M=161.9, SD=36.24). This research contributes to the field by extending the TAM framework for understanding non-expert AutoML requirements, introducing empirically grounded design principles for usable and transparent AutoML systems, and successfully developing VisAutoML 2.0 with demonstrably enhanced usability and transparency. These contributions provide valuable guidance for making ML more accessible to broader audiences, advancing the democratization of AI technologies beyond technical specialists. Future work should explore additional application domains and further refinements of scaffolding and XAI approaches. |
| first_indexed | 2025-11-14T21:05:31Z |
| format | Thesis (University of Nottingham only) |
| id | nottingham-81343 |
| institution | University of Nottingham Malaysia Campus |
| institution_category | Local University |
| language | English |
| last_indexed | 2025-11-14T21:05:31Z |
| publishDate | 2025 |
| recordtype | eprints |
| repository_type | Digital Repository |
| spelling | nottingham-81343 2025-07-26T04:40:29Z https://eprints.nottingham.ac.uk/81343/ Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study Sahidon, Muhammad Alif Danial The inherent complexity of Artificial Intelligence (AI) and Machine Learning (ML) tools creates significant barriers for non-expert users. Traditional ML workflows require specialized programming and statistical knowledge, limiting widespread adoption across various domains where these technologies could provide substantial benefits. This research aimed to develop and evaluate VisAutoML, an automated machine learning tool specifically designed to provide non-expert users with a transparent and user-friendly ML development experience for tabular data. The study sought to identify key factors influencing tool acceptance, understand specific challenges faced by non-experts, and create novel design principles to address these challenges. The research employed a five-stage iterative user-centered design methodology. This included a mixed-methods study utilizing an extended Technology Acceptance Model (TAM) to identify acceptance factors and user challenges. The design integrated technology-enhanced scaffolding and Explainable Artificial Intelligence (XAI) principles tailored for non-experts, featuring visualizations of activities, demonstration of scaffold functions, contextually relevant support, and progressive disclosure of XAI visualizations. Two versions of VisAutoML were developed and evaluated against both commercial alternatives and established benchmarks. An initial comparison between VisAutoML 1.0 and H2O AutoML showed significantly higher System Usability Scale scores (61.5 vs 38.5) for VisAutoML and a 20.94% increase in correct answers on knowledge assessments. The redesigned VisAutoML 2.0 demonstrated substantial improvements, with 75% of participants completing ML model development tasks in under 5 minutes. User Experience Questionnaire results showed 'good' scores for pragmatic quality (M=1.60, SD=0.912) and 'excellent' scores for hedonic quality (M=1.59, SD=0.899) and overall usability (M=1.60, SD=0.851). Trust measures were moderate (M=26.11, SD=4.67), while perceived explainability ratings were high (M=161.9, SD=36.24). This research contributes to the field by extending the TAM framework for understanding non-expert AutoML requirements, introducing empirically grounded design principles for usable and transparent AutoML systems, and successfully developing VisAutoML 2.0 with demonstrably enhanced usability and transparency. These contributions provide valuable guidance for making ML more accessible to broader audiences, advancing the democratization of AI technologies beyond technical specialists. Future work should explore additional application domains and further refinements of scaffolding and XAI approaches. 2025-07-26 Thesis (University of Nottingham only) NonPeerReviewed application/pdf en https://eprints.nottingham.ac.uk/81343/1/Sahidon%2C%20Alif%2C%2018024387.pdf Sahidon, Muhammad Alif Danial (2025) Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study. PhD thesis, University of Nottingham. automated machine learning (AutoML); non-expert users; explainable AI (XAI); user-centered design; technology acceptance model (TAM) |
| spellingShingle | automated machine learning (AutoML); non‑expert users; explainable AI (XAI); user‑centered design; technology acceptance model (TAM) Sahidon, Muhammad Alif Danial Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study |
| title | Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study |
| title_full | Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study |
| title_fullStr | Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study |
| title_full_unstemmed | Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study |
| title_short | Bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study |
| title_sort | bridging the automated machine learning transparency and usability gap for non-experts: a user-centered design and evaluation study |
| topic | automated machine learning (AutoML); non‑expert users; explainable AI (XAI); user‑centered design; technology acceptance model (TAM) |
| url | https://eprints.nottingham.ac.uk/81343/ |