AI-Enhanced Clinical Decision Support: Trustworthiness, Explainability, and Ethical Challenges - Group 1

Term: 2025-2026 Fall
Faculty/Department of Project Supervisor: Faculty of Engineering and Natural Sciences
Number of Students: 3

Context
The integration of Artificial Intelligence (AI) into healthcare decision-making has created both opportunities and critical challenges. While clinical decision support systems powered by AI can accelerate diagnosis, predict risks, and guide treatments, their acceptance depends heavily on trustworthiness, transparency, and ethical use. Many systems remain “black boxes,” leaving clinicians hesitant to rely on them in life-critical decisions. This project will investigate how AI can be made more explainable, auditable, and ethically aligned with healthcare practices.
Research Objectives

  • Explainability Frameworks: Explore state-of-the-art methods (e.g., SHAP, LIME, counterfactual explanations) for improving the interpretability of medical AI models (a minimal SHAP example follows this list).
  • Bias & Fairness Audits: Identify and quantify epistemic biases and algorithmic determinism in decision support pipelines.
  • Trustworthiness Metrics: Use multi-dimensional evaluation criteria that balance accuracy, interpretability, and clinical usability.
  • Ethical Guidelines: Develop case studies addressing dilemmas such as over-reliance, under-reliance, and patient autonomy.
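
As a concrete illustration of the explainability objective, the sketch below trains a simple risk model on synthetic data and ranks features by their mean absolute SHAP value. It is only a starting point under stated assumptions: the feature names, outcome definition, and model choice are hypothetical placeholders, not project specifications.

# Minimal SHAP sketch on synthetic, hypothetical clinical features.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(20, 90, n),          # placeholder features, not project data
    "systolic_bp": rng.normal(130, 15, n),
    "hba1c": rng.normal(6.5, 1.2, n),
})
# Synthetic binary outcome (a stand-in for, e.g., readmission risk)
y = ((X["age"] / 90 + X["hba1c"] / 10 + rng.normal(0, 0.2, n)) > 1.3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer returns per-prediction Shapley values (additive feature contributions)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global importance: mean absolute contribution of each feature
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))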

Programs & Tools Students Can Use

  • Machine Learning Frameworks: Python-based TensorFlow, PyTorch, Scikit-learn for predictive modeling.
  • Explainability Libraries: SHAP, LIME, Captum (PyTorch) for model interpretability.
  • Medical Data Standards: FHIR (Fast Healthcare Interoperability Resources) and HL7 for structuring clinical data (a minimal flattening example follows this list).
  • Data Processing: Pandas, NumPy, and SQL/NoSQL databases for healthcare datasets.
  • Visualization Tools: Matplotlib, Seaborn, Plotly Dash for presenting model explanations.
  • Ethics & Compliance Tools: EU AI Act guidelines, bioethics case analysis software (NVivo for qualitative coding).

Expected Outcomes

  • A prototype decision support pipeline with integrated explainability features.
  • A whitepaper on ethical and regulatory considerations in clinical AI adoption.
  • Publication-ready results linking technical outcomes with medical ethics.

Student Gains
Students will gain hands-on experience in applied machine learning, healthcare data analysis, explainable AI (XAI), and bioethics, and will engage with real-world case studies that prepare them for interdisciplinary careers bridging AI and the health sciences. They will also learn to design AI pipelines compatible with healthcare regulations, simulate trustworthiness audits, and document compliance in clinical-trial-like settings.
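
One way a trustworthiness audit could be simulated is a subgroup performance check, sketched below on random placeholder predictions. The subgroup attribute, the metrics, and the 0.05 flagging threshold are illustrative assumptions, not a validated audit protocol.

# Hedged sketch: subgroup performance audit on placeholder predictions.
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(1)
n = 1000
audit = pd.DataFrame({
    "y_true": rng.integers(0, 2, n),              # ground-truth outcomes (synthetic)
    "y_pred": rng.integers(0, 2, n),              # model predictions (synthetic)
    "sex": rng.choice(["female", "male"], n),     # hypothetical subgroup attribute
})

rows = []
for group, g in audit.groupby("sex"):
    rows.append({
        "group": group,
        "n": len(g),
        "accuracy": accuracy_score(g["y_true"], g["y_pred"]),
        "sensitivity": recall_score(g["y_true"], g["y_pred"]),
    })
report = pd.DataFrame(rows).set_index("group")
print(report)

# A notable sensitivity gap between subgroups would be flagged for review
# in the audit documentation (the 0.05 threshold is only illustrative).
gap = report["sensitivity"].max() - report["sensitivity"].min()
print(f"Sensitivity gap across subgroups: {gap:.3f} (flag if > 0.05)")
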
Requirement
Knowledge of Python and at least one ML framework (PyTorch or TensorFlow) is mandatory. Familiarity with explainability libraries (e.g., SHAP, LIME) or prior experience with healthcare data projects is a big plus.
Related Areas
Computer Science and Engineering, Artificial Intelligence, Machine Learning, Bioinformatics, Health Informatics, Psychology, Medical Sciences, Ethics, Business Analytics
Application Process (Final Note):
You should prepare:
  • Latest Academic Transcript (official or system-generated PDF from your student portal)
  • 1-page Letter of Interest explaining:
      • why you are interested in the project,
      • any prior experience (coursework, projects, internships),
      • how you want to contribute (e.g., technical development, policy analysis, data science).

After you complete your online application, email the documents above to polat.goktas@sabanciuniv.edu with the chosen Project Name in the subject line.

Related Areas of Project: 
Computer Science and Engineering
Molecular Biology, Genetics and Bioengineering
Psychology
Business Analytics

About Project Supervisors