Online Workshop: March 1, 2022
The recording is available at: https://youtu.be/cM7I357nrpA.
For poster sessions: posters can be viewed here. If you are interested in a poster, click the corresponding Zoom link; the authors will be presenting their posters in their Zoom rooms.

AI for healthcare has emerged as a highly active research area in the past few years and has made significant progress. AI methods have achieved human-level performance in tasks such as skin cancer classification, diabetic eye disease detection, chest radiograph diagnosis, and sepsis treatment.

While existing results are encouraging, few clinical AI solutions have been deployed in hospitals or are actively used by physicians. A major problem is that existing clinical AI methods are not sufficiently trustworthy. For example, existing approaches make clinical decisions in a black-box way, which renders the decisions difficult to understand and less transparent. Existing solutions are not robust to small perturbations or potential adversarial attacks, which raises security and privacy concerns. In addition, existing methods are often biased toward specific ethnic groups or subpopulations; these biases may result in unfair predictions that are less reliable for other ethnic groups or subpopulations. All of these problems render existing solutions less trustworthy. As a result, physicians are reluctant to use them, since clinical decisions are mission-critical and must be made with high trust and reliability.

In this workshop, we aim to address the trustworthiness issues of clinical AI solutions. We aim to bring together researchers in AI, healthcare, medicine, NLP, social science, and related fields, and to facilitate discussions and collaborations on developing trustworthy AI methods that are reliable and more acceptable to physicians.

This will be a one-day workshop featuring speakers, panelists, and poster presenters from machine learning, biomedical informatics, natural language processing, statistics, behavioral science, and other fields, covering topics that include but are not limited to:

  • Interpretable AI methods for healthcare
  • Robustness of clinical AI methods
  • Medical knowledge grounded AI
  • Physician-in-the-loop AI
  • Security and privacy in clinical AI
  • Fairness in AI for healthcare
  • Ethics in AI for healthcare
  • Robust and interpretable natural language processing for healthcare
  • Methods for robust weak supervision