A Look Back at BVM 2021 at OTH Regensburg
Machine learning approaches, and especially deep neural networks, have had tremendous success in medical imaging in the past few years. Machine learning-based image reconstruction techniques are used to acquire high-resolution images at a much faster pace than before and, in the case of CT, with lower doses of ionizing radiation. Automated, quantitative image analysis with convolutional neural networks is now in many cases as accurate as the assessment of an expert observer. Imaging biomarkers extracted via machine learning are studied to improve diagnosis, prognosis, and treatment decisions, and the first autonomous AI systems have been approved for diagnostic use and for patient triage in emergency radiology settings.
Machine learning, however, requires training datasets that are representative of the target data, cover the range of variation that will be observed in that data, and are carefully labelled, often with time-consuming manual annotation strategies that require input from clinical experts. This hampers the adoption of machine learning in many medical image analysis tasks. In this talk, we will discuss various approaches to make machine learning techniques work in practical situations where training data is limited, data is highly heterogeneous, annotations are difficult to obtain, available annotations may be wrong, and training data may not be representative of the target data. Possible solutions include semi-supervised and weakly labeled learning, domain adaptation, and crowdsourcing of visual analysis.
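As a minimal, illustrative sketch of one of these directions (not taken from the talk), the snippet below shows semi-supervised self-training with scikit-learn: only a small fraction of the samples carries an expert label, and the classifier's confident predictions on the unlabelled remainder are reused as pseudo-labels. The feature vectors and all numbers are made up.

```python
# A minimal sketch of semi-supervised self-training with scikit-learn (toy data,
# not from the talk): only a few samples carry an expert label (-1 = unlabelled),
# and the classifier's confident predictions are reused as pseudo-labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 32))                       # stand-in for image-derived features
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hypothetical ground truth

y = np.full(500, -1)                                 # -1 marks unlabelled samples
labelled = rng.choice(500, size=25, replace=False)   # only 25 expert annotations
y[labelled] = y_true[labelled]

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
model.fit(X, y)
print("accuracy on all samples:", (model.predict(X) == y_true).mean())
```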
We will also discuss the potential of direct, machine learning-based diagnostics and prognostics. Currently, most quantitative imaging biomarkers used for diagnosis and prognosis are factors that are already well-known to indicate disease, such as the density of lung tissue, which relates to lung function, or the size of certain brain structures, which may help to predict the development of dementia. With such image quantification designed by experts – and AI models trained to mimic these experts – simplifications are made and the focus is on a small number of easily quantifiable image aspects. Machine learning enables a new, more data-driven approach. Image characteristics related to disease outcome can be learned directly from databases that combine medical imaging data with patient outcomes (e.g., the clinical diagnosis, therapy outcome, or future disease progression). This fully exploits the rich information present in medical imaging data and does not require time-consuming and error-prone manual annotations. I will show that this can result in stronger, more predictive imaging biomarkers.
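As a hedged illustration of this data-driven approach (the architecture, shapes, and random data below are placeholders, not the speaker's models), a small PyTorch network can map an image directly to an outcome label, with no intermediate, expert-designed biomarker:

```python
# A hedged sketch (placeholder architecture and random data, not the speaker's
# models): a small PyTorch CNN maps an image directly to an outcome label,
# with no intermediate, expert-designed biomarker.
import torch
import torch.nn as nn

class OutcomeNet(nn.Module):
    def __init__(self, n_outcomes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_outcomes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

images = torch.randn(8, 1, 128, 128)        # toy batch of single-channel "scans"
outcomes = torch.randint(0, 2, (8,))        # e.g. disease progression yes/no

model = OutcomeNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.CrossEntropyLoss()(model(images), outcomes)
loss.backward()
optimiser.step()                            # one illustrative training step
print("training loss:", float(loss))
```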
I will present examples in neuroimaging, pulmonary imaging, and vascular imaging applications.
Prof. Dr. Marleen de Bruijne is professor of AI in medical image analysis at Erasmus MC, The Netherlands, and at the University of Copenhagen, Denmark. She received an MSc degree in physics (1997) and a PhD degree in medical imaging (2003) from Utrecht University. From 2003 to 2006 she was assistant professor and later associate professor at the IT University of Copenhagen, Denmark. Prof. de Bruijne has (co-)authored over 200 peer-reviewed full papers in international conferences and journals, holds 7 patents, and is the recipient of the prestigious NWO-VENI, NWO-VIDI, NWO-VICI, and DFF-YDUN awards. She has (co-)supervised 30 PhD students. She is program chair of the international conferences MICCAI (2021) and MIDL (2021, 2020) and is a regular member of the program committees of MIDL, MICCAI, SPIE Medical Imaging, ISBI, and IPMI. She is chair of the EMBS Technical Committee on Biomedical Imaging and Image Processing and a member of the MICCAI board, the ISBI Steering Committee, the Information Processing in Medical Imaging (IPMI) board, and the editorial boards of Medical Image Analysis, the Journal of Machine Learning for Biomedical Imaging, and Frontiers in ICT. Her research is in machine learning for quantitative image analysis and computer-aided diagnosis in different application areas.
Convolutional Neural Networks (CNNs) have played a central role in image analysis, with many successful applications in object detection, segmentation, and identification. The design of a CNN model traditionally relies on the pre-annotation of a large dataset, the choice of the model's architecture, and the tuning of the training hyperparameters. These models are often seen as ``black boxes'', implying that one cannot explain their decisions. Explainable artificial intelligence (XAI) has emerged to address this problem and to avoid misinterpretation of the results. However, the importance of user and designer participation in the machine learning loop has received little attention so far.
In medical image computing, data annotation is costly, often scarce, and depends on an expert in the application domain (the user). The choice of the model's architecture and the tuning of the training hyperparameters rely on the network designer (an expert in AI). The user's absence from the machine learning loop leaves essential questions unanswered (e.g., which are the most relevant samples for annotation?), while the lack of interactive methodologies to learn filters and model architectures limits the designer to interpreting the model. The user and designer should therefore actively participate in the data annotation and training processes, both assisted by the machine, to increase human understanding and control, reduce human effort, and improve interpretation of the results.
This lecture addresses some of these problems by presenting an interactive methodology for the design of CNN filters from markers in medical images, and a semi-automatic data annotation method guided by feature projection. The user starts the training process by selecting a few images per class and drawing strokes (markers) in regions that discriminate the classes. The designer defines an initial network architecture, and the filters of the CNN are computed automatically, with no need for backpropagation. The user and designer may then decide on the most suitable filters based on data visualization. The image features extracted by the CNN are projected into 2D for semi-automatic data annotation. The user analyzes the 2D projection and annotates the most challenging samples, while a semi-supervised classifier propagates the labels to the remaining ones. The annotated dataset can then be used to revisit the design of the CNN model, as illustrated for applications of medical image computing.
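The snippet below is a rough sketch of this pipeline under strong simplifying assumptions; it is not the speaker's implementation. Patches around user-marked pixels are clustered and the cluster centres serve as convolution kernels (no backpropagation), the resulting per-image features are projected to 2D, and a semi-supervised classifier propagates a handful of user labels in the projected space. All images, marker coordinates, and parameters are toy values.

```python
# Illustrative sketch only: marker-based filter estimation (no backpropagation),
# 2D feature projection, and semi-supervised label propagation on toy data.
import numpy as np
from scipy.ndimage import convolve
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.semi_supervised import LabelSpreading

def filters_from_markers(image, marker_coords, patch=5, n_filters=4, seed=0):
    """Cluster patches centred at marker pixels; cluster centres become kernels."""
    half = patch // 2
    padded = np.pad(image, half, mode="reflect")
    patches = np.asarray([padded[r:r + patch, c:c + patch].ravel()
                          for r, c in marker_coords], dtype=float)
    patches -= patches.mean(axis=1, keepdims=True)            # zero-mean patches
    km = KMeans(n_clusters=n_filters, n_init=10, random_state=seed).fit(patches)
    kernels = km.cluster_centers_.reshape(n_filters, patch, patch)
    norms = np.linalg.norm(kernels.reshape(n_filters, -1), axis=1)
    return kernels / np.maximum(norms, 1e-8)[:, None, None]   # unit-norm kernels

def extract_features(image, kernels):
    """Convolve with each kernel, apply ReLU, and global-average-pool."""
    return np.asarray([np.maximum(convolve(image, k), 0).mean() for k in kernels])

rng = np.random.default_rng(0)
images = [rng.normal(size=(64, 64)) for _ in range(40)]       # toy "medical" images
markers = [(10, 10), (20, 30), (40, 40), (50, 12), (32, 48), (8, 55)]  # toy user strokes
kernels = filters_from_markers(images[0], markers)

X = np.stack([extract_features(img, kernels) for img in images])
X2d = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)

y = np.full(len(images), -1)                                  # -1 = unlabelled
y[:3], y[-3:] = 0, 1                                          # user labels a few hard samples
propagated = LabelSpreading(kernel="knn", n_neighbors=5).fit(X2d, y)
print("propagated labels:", propagated.transduction_)
```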
Alexandre Xavier Falcao (lids.ic.unicamp.br) is a full professor at the Institute of Computing (IC), University of Campinas (Unicamp), where he has worked since 1998.
He attended the Federal University of Pernambuco from 1984 to 1988, where he received a B.Sc. in Electrical Engineering. He then attended Unicamp, where he received an M.Sc. (1993) and a Ph.D. (1996) in Electrical Engineering, working on volumetric data visualization and medical image segmentation. During his Ph.D., he worked with the Medical Image Processing Group at the University of Pennsylvania from 1994 to 1996. In 1997, he developed video quality assessment methods for Globo TV. In 2011-2012, he spent a one-year sabbatical at the Robert W. Holley Center for Agriculture and Health (USDA, Cornell University), working on image analysis applied to plant biology. He served as Associate Director of IC-Unicamp (2006-2007), Coordinator of its Post-Graduation Program (2009-2011), and Senior Area Editor of IEEE Signal Processing Letters (2016-2020). He is currently a research fellow at the top level of the Brazilian National Council for Scientific and Technological Development (CNPq), President of the Special Commission of Computer Graphics and Image Processing (CEGRAPI) of the Brazilian Computer Society (SBC), and Area Coordinator of Computer Science for the Sao Paulo Research Foundation (FAPESP).
Among several awards, it is worth mentioning two Unicamp inventor awards in the category "License Technology" (2011 and 2012), three awards of academic excellence (2006, 2011, 2016) from IC-Unicamp, one award of academic recognition "Zeferino Vaz" from Unicamp (2014), and the best paper award of 2012 from the journal Pattern Recognition (received in Stockholm, Sweden, during ICPR 2014).
His research aims at computational models to learn and interpret the semantic content of images across several application domains. His areas of interest include image and video processing, data visualization, medical image analysis, remote sensing, graph algorithms, image annotation, organization, and retrieval, and (interactive) machine learning and pattern recognition.
Talk
Artificial intelligence (AI) will revolutionize our daily life and will have a tremendous impact on health care. The influence seems especially substantial in disciplines where imaging plays an important role: radiology, pathology, and endoscopy will all benefit from these developments. So far, the diagnosis of diseases from images has been based on the experience of the physician (radiologist, pathologist, endoscopist) and is highly subjective, with low inter- and intra-observer agreement.
Meanwhile, AI has become routine in some areas of endoscopy, such as screening colonoscopy. The quality of screening colonoscopy is commonly measured by the adenoma detection rate (ADR), the proportion of screening colonoscopies in which at least one adenoma is detected. Usually an ADR of at least 20% (women) or 25% (men) is recommended. Different techniques, such as (virtual) chromoendoscopy, caps on the distal end of the endoscope, or optimizing withdrawal time, have been shown to increase the ADR. Using AI, the first randomized trials have shown a significant increase in ADR, mainly for small polyps (< 5 mm). However, it is questionable whether these small polyps have any clinical impact. Besides detection, the differentiation of polyps is of major importance. First prototypes have shown that it is possible to differentiate adenomatous from non-adenomatous polyps, which is of clinical relevance.
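For illustration only (all numbers below are made up), the ADR is a simple proportion and can be computed as follows:

```python
# A purely illustrative ADR calculation (made-up numbers): the ADR is the share
# of screening colonoscopies in which at least one adenoma was detected.
adenomas_per_exam = [0, 1, 0, 2, 0, 0, 1, 0, 3, 0]     # hypothetical case series
adr = sum(n > 0 for n in adenomas_per_exam) / len(adenomas_per_exam)
print(f"ADR = {adr:.0%}")                              # 40% here; >= 25% is the usual target for men
```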
Meanwhile, similar efforts are being made for gastric and esophageal cancer. Our group was the first worldwide to show that AI can differentiate normal Barrett's mucosa from dysplastic mucosa, a precursor of cancer. We are now able to detect cancer in real time during endoscopy.
Besides the detection and differentiation of polyps and cancer, the invasion depth of a cancer is of clinical importance. Usually, endoscopic ultrasound is used for staging early cancers to predict whether endoscopic treatment or surgery is necessary. AI seems to have the potential to determine the invasion depth of early tumors and thus guide the optimal therapy.
In addition, AI can monitor the endoscopist during the procedure to avoid incomplete visual inspection of the GI tract.
Clinical Focus
- Gastrointestinal Oncology
- Interventional Endoscopy
Postgraduate career
2019 | Secretary of the Society of Endoscopy of DGVS |
2019 | President elect ESGE |
Since 1/2002 | Head of the Department of Internal Medicine III (Gastroenterology, Hepatology, Gastrointestinal Oncology, Infectiology, Rheumatology, Intensive Care Medicine) Augsburg Medical Center, Augsburg, Germany |
1999 – 2001 | Consultant for GI-Cancer, University of Regensburg |
1998 – 2001 | Attending, Dept. of Gastroenterology, University of Regensburg |
1995 – 1998 | Resident, Dept. of Oncology/Hematology, University of Regensburg |
1992 – 1995 | Resident/fellow, Dept. of Gastroenterology, University of Regensburg |
1998 – 2000 | Postdoctoral Fellow, National Medical Laser Centre, University College London (Prof. S. Bowen) and Middlesex Hospital (Prof. Hatfield) |
1988 – 1992 | Resident, Dept. of Intern. Med., Krankenhaus Barmherzige Brüder, Regensburg |
1987 | Fellow, Institute of Pharmacology, University of Regensburg |
Societies
2016 – 2017 | President and Board Member of the German Society of Coloproctology |
Since 2014 | Governing Board Member and Treasurer of the European Society of Gastrointestinal Endoscopy (ESGE) |
2014 | President and Board Member of the Society of Gastroenterology in Bavaria e. V. |
2012 | President and Board Member of the German Society of Endoscopy and Imaging (DGE-BV) |
2010 | President of the German Society of Intensive Care Medicine (DGIIN) |
2008 | President of the German Society of Endoscopy |
2006 – 2016 | Adviser and Board Member of the German Society for Digestive and Metabolic Diseases (DGVS) |
2006 | President of the Working Group for GI-Oncology of the DGVS |
1997 | Secretary of the Working Group for GI-Oncology of the DGVS |
Miscellaneous GI-Cancer
Co-author of the German S3 Guidelines for Esophageal and Gastric Cancer
Editor of „Gastroenterologische Onkologie“ (Thieme Verlag; eds. Messmann, Tannapfel, Werner), 2017
Listed in Focus as Expert for GI-Cancer