Validation and transparency for AI-based diagnosis and prognosis in healthcare

Edited by:

Wouter van Amsterdam, MD, PhD, University Medical Center Utrecht, The Netherlands
Anne de Hond, PhD, University Medical Center Utrecht, The Netherlands
Maarten van Smeden, PhD, University Medical Center Utrecht, The Netherlands

Submission Status: Open   |   Submission Deadline: 31 January 2025 
 

The journal Diagnostic and Prognostic Research is calling for submissions to our Collection on Validation and transparency for AI-based diagnosis and prognosis in healthcare.


Image credit: © [M] Parradee / stock.adobe.com

This Collection supports and amplifies research related to SDG 3: Good Health & Wellbeing and SDG 9: Industry & Innovation.

About the Collection

Artificial intelligence (AI) has led to a surge in the development of diagnostic and prognostic models in healthcare. Some of these AI models have demonstrated remarkable performance, rivalling that of physicians, particularly in the context of diagnostic imaging. However, concerns persist regarding the validity and transparency of these models. Rigorous validation is essential to ensure that AI-based prognosis and diagnosis can be used safely and accurately in clinical practice. Transparency is crucial to gain trust in these algorithms and facilitate accountability. To address this, we invite contributions to our Collection focused on the validation and transparency of AI-based diagnosis and prognosis.  

Authors can submit manuscripts as Research articles, Methodology papers, Reviews, Protocols, and Commentaries. Topics of interest for this Collection include, but are not limited to:

- Methodological research investigating novel ways to rigorously validate AI prognosis and diagnosis, including methods pertaining to Large Language Models.

- Methodological research on state-of-the-art methods to enhance the transparency of AI-based diagnosis and prognosis.

- Applied research on AI-based diagnosis or prognosis with a focus on rigorous validation or explainable AI methods.

- Applied research assessing the impact of AI-based diagnosis and prognosis across diverse patient populations to address fairness concerns.

- Impact studies assessing the added value of AI-based diagnosis or prognosis for decision-making.

Meet the Guest Editors

Wouter van Amsterdam, MD, PhD, University Medical Center Utrecht, The Netherlands

Dr. Wouter van Amsterdam is an assistant professor at the Julius Center for Health Sciences and Primary Care, UMC Utrecht, the Netherlands. He works on methods and applications of machine learning and causal inference for healthcare, focusing on the intersection of the two fields. This intersection runs in both directions: machine learning can improve typical causal inference tasks, such as estimating individual treatment effects and making predictions under intervention, while causal inference provides a formal framework for understanding and improving the generalization and robustness of prediction models. Wouter holds degrees in physics (BSc), medicine (MD), and epidemiology (MSc), and completed a PhD on machine learning for healthcare co-advised by Rajesh Ranganath of New York University.
 

Anne de Hond, PhD, University Medical Center Utrecht, The Netherlands

Dr. Anne de Hond is an assistant professor in the data science program of the Julius Center for Health Sciences and Primary Care, UMC Utrecht, the Netherlands. She completed a master's degree in econometrics and, during her PhD studies, worked as a data scientist and researcher in the AI team at Leiden UMC’s CAIRELab. Among other applications, she developed AI models to predict hospital admission at the emergency department and depression risk in patients with cancer. She has also collaborated with various AI startups to investigate the practical implementation of their software. Her current research focuses on validation methods for clinical AI, including large language models such as ChatGPT, as well as the explainability and fairness of AI algorithms.

Dr. de Hond serves as Associate Editor for Diagnostic and Prognostic Research.
 

Maarten van Smeden, PhD, University Medical Center Utrecht, The Netherlands

Dr. Maarten van Smeden is a medical statistician and associate professor at the Julius Center for Health Sciences and Primary Care, UMC Utrecht, the Netherlands. He coordinates the methodology research program, is a member of the Julius Center’s leadership team, and heads its data science team. His research focuses on the development and evaluation of statistical methodology, with a particular interest in methods for the development and validation of prediction models based on machine learning and artificial intelligence. He is also director of the AI methods lab at UMC Utrecht.

Dr. van Smeden serves as Associate Editor for Diagnostic and Prognostic Research.
 

Submission Guidelines

This Collection welcomes submission of Research articles, Methodology papers, Reviews, Protocols, and Commentaries. Should you wish to submit a different article type, please read our submission guidelines to confirm that the type is accepted by the journal.

Articles for this Collection should be submitted via our submission system, Editorial Manager. Please select the appropriate Collection title, “Validation and transparency for AI-based diagnosis and prognosis in healthcare”, from the dropdown menu.

Articles will undergo the journal’s standard peer-review process and are subject to all the journal’s standard policies. Articles will be added to the Collection as they are published.

The Editors have no competing interests with the submissions they handle through the peer-review process. Peer review of any submission for which an Editor has a competing interest is handled by another Editorial Board Member who has no competing interests.