Data Science for AI Chatbot Bias Detection and Mitigation in Healthcare
The integration of AI chatbots into healthcare systems presents transformative potential to enhance patient access, assist clinical decision-making, and streamline administrative workflows. Despite these advantages, the deployment of AI chatbots introduces significant concerns related to bias, which can diminish care quality and reinforce existing health disparities.
| Main Authors: | Amareshwari, Parunandhi; Sathwika, Nagasamudrala; Laxmi Prasanna, Mendu; Sreeja, Somanaboina |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | INTI International University, 2025 |
| Subjects: | QA75 Electronic computers. Computer science; QA76 Computer software; T Technology (General); TK Electrical engineering. Electronics. Nuclear engineering |
| Online Access: | http://eprints.intimal.edu.my/2156/ http://eprints.intimal.edu.my/2156/1/jods2025_14.pdf http://eprints.intimal.edu.my/2156/2/699 |
| Field | Value |
|---|---|
| author | Amareshwari, Parunandhi; Sathwika, Nagasamudrala; Laxmi Prasanna, Mendu; Sreeja, Somanaboina |
| building | INTI Institutional Repository |
| description | The integration of AI chatbots into healthcare systems presents transformative potential to enhance patient access, assist clinical decision-making, and streamline administrative workflows. Despite these advantages, the deployment of AI chatbots introduces significant concerns related to bias, which can diminish care quality and reinforce existing health disparities. This paper investigates the key sources of bias in AI chatbots, including dataset imbalances, algorithmic design flaws, and linguistic biases that may perpetuate stereotypes. These forms of bias can lead to misdiagnoses, inequitable treatment suggestions, and a breakdown of trust in AI-driven tools, particularly affecting marginalized or underserved populations. The study underscores the broader consequences of biased AI systems in healthcare, such as reinforcing discrimination and widening healthcare inequalities. To confront these challenges, the paper outlines methodologies for bias detection, including the use of fairness metrics and testing across diverse demographic cohorts. It also discusses mitigation strategies like representative data sampling, algorithmic refinement, feedback loops, and human oversight to ensure ethical and equitable AI usage. |
| format | Article |
| id | intimal-2156 |
| institution | INTI International University |
| institution_category | Local University |
| language | English |
| publishDate | 2025 |
| publisher | INTI International University |
| recordtype | eprints |
| repository_type | Digital Repository |
| spelling | Amareshwari, Parunandhi and Sathwika, Nagasamudrala and Laxmi Prasanna, Mendu and Sreeja, Somanaboina (2025) Data Science for AI Chatbot Bias Detection and Mitigation in Healthcare. Journal of Data Science, 2025 (14). pp. 1-13. ISSN 2805-5160. Published 2025-06 by INTI International University; peer-reviewed; license cc_by_4; full text: http://eprints.intimal.edu.my/2156/1/jods2025_14.pdf http://ipublishing.intimal.edu.my/jods.html |
| title | Data Science for AI Chatbot Bias Detection and Mitigation in Healthcare |
| topic | QA75 Electronic computers. Computer science QA76 Computer software T Technology (General) TK Electrical engineering. Electronics Nuclear engineering |
| url | http://eprints.intimal.edu.my/2156/ http://eprints.intimal.edu.my/2156/1/jods2025_14.pdf http://eprints.intimal.edu.my/2156/2/699 |
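The abstract above mentions bias detection via "fairness metrics and testing across diverse demographic cohorts." As an illustration only (this sketch is not taken from the indexed paper, and the data and function names are invented), a minimal demographic-parity check over hypothetical chatbot triage decisions might look like:

```python
# Illustrative fairness-metric check, NOT from the indexed paper:
# computes the demographic parity difference -- the gap in
# positive-decision rates between two patient cohorts -- for
# hypothetical chatbot "refer to specialist" decisions.

def demographic_parity_difference(decisions, groups, positive=1):
    """Return (gap, per-group rates) for positive-decision rates
    across exactly two groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(1 for d in outcomes if d == positive) / len(outcomes)
    values = list(rates.values())
    return abs(values[0] - values[1]), rates

# Invented example data: 1 = referred to specialist, 0 = not referred.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = demographic_parity_difference(decisions, groups)
print(rates)  # per-cohort referral rates: A -> 0.75, B -> 0.25
print(gap)    # 0.5 -- a large gap would flag the model for review
```

A gap near zero suggests the two cohorts receive positive decisions at similar rates; in practice such metrics would be computed on real audit data and combined with error-rate-based metrics (e.g., equal opportunity) before drawing conclusions.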