mediaTUM
University Library
Technische Universität München
Title:
Comparing Commercial and Open-Source Large Language Models for Labeling Chest Radiograph Reports.
Document type:
Journal Article; Comparative Study
Author(s):
Dorfner, Felix J; Jürgensen, Liv; Donle, Leonhard; Al Mohamad, Fares; Bodenmann, Tobias R; Cleveland, Mason C; Busch, Felix; Adams, Lisa C; Sato, James; Schultz, Thomas; Kim, Albert E; Merkow, Jameson; Bressem, Keno K; Bridge, Christopher P
Abstract:
Background: Rapid advances in large language models (LLMs) have led to the development of numerous commercial and open-source models. While recent publications have explored OpenAI's GPT-4 to extract information of interest from radiology reports, there has not been a real-world comparison of GPT-4 to leading open-source models.

Purpose: To compare different leading open-source LLMs to GPT-4 on the task of extracting relevant findings from chest radiograph reports.

Materials and Methods: Two independent datasets of free-text radiology reports from chest radiograph examinations were used in this retrospective study performed between February 2, 2024, and February 14, 2024. The first dataset consisted of reports from the ImaGenome dataset, providing reference standard annotations from the MIMIC-CXR database acquired between 2011 and 2016. The second dataset consisted of randomly selected reports created at the Massachusetts General Hospital between July 2019 and July 2021. In both datasets, the commercial models GPT-3.5 Turbo and GPT-4 were compared with open-source models that included Mistral-7B and Mixtral-8×7B (Mistral AI), Llama 2-13B and Llama 2-70B (Meta), and Qwen1.5-72B (Alibaba Group), as well as CheXbert and CheXpert-labeler (Stanford ML Group), in their ability to accurately label the presence of multiple findings in radiograph text reports using zero-shot and few-shot prompting. The McNemar test was used to compare F1 scores between models.

Results: On the ImaGenome dataset (n = 450), the open-source model with the highest score, Llama 2-70B, achieved micro F1 scores of 0.97 and 0.97 for zero-shot and few-shot prompting, respectively, compared with the GPT-4 F1 scores of 0.98 and 0.98 (P > .99 and P < .001 for superiority of GPT-4). On the institutional dataset (n = 500), the open-source model with the highest score, an ensemble model, achieved micro F1 scores of 0.96 and 0.97 for zero-shot and few-shot prompting, respectively, compared with the GPT-4 F1 scores of 0.98 and 0.97 (P < .001 and P > .99 for superiority of GPT-4).

Conclusion: Although GPT-4 was superior to open-source models in zero-shot report labeling, few-shot prompting with a small number of example reports closely matched the performance of GPT-4. The benefit of few-shot prompting varied across datasets and models.

© RSNA, 2024. Supplemental material is available for this article.
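The abstract reports micro F1 scores aggregated over multiple findings per report. As a minimal illustration of that metric (not the authors' code; the finding labels below are hypothetical), micro-averaging pools true positives, false positives, and false negatives across all labels before computing a single F1 score:

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over multilabel predictions.

    y_true, y_pred: parallel lists, one set of finding labels per report.
    """
    tp = fp = fn = 0
    for true_labels, pred_labels in zip(y_true, y_pred):
        tp += len(true_labels & pred_labels)   # findings correctly predicted
        fp += len(pred_labels - true_labels)   # findings predicted but absent
        fn += len(true_labels - pred_labels)   # findings present but missed
    return 2 * tp / (2 * tp + fp + fn) if (tp or fp or fn) else 1.0

# Hypothetical example: two reports, three possible findings.
truth = [{"edema", "effusion"}, {"pneumonia"}]
preds = [{"edema"}, {"pneumonia", "effusion"}]
print(round(micro_f1(truth, preds), 2))  # 0.67
```

Because the counts are pooled before averaging, frequent findings dominate the score, which is why micro F1 is a common choice when label prevalence is very uneven, as in radiology report labeling.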
Journal title:
Radiology
Year:
2024
Volume:
313
Issue:
1
Full text / DOI:
doi:10.1148/radiol.241139
PubMed:
http://view.ncbi.nlm.nih.gov/pubmed/39470431
Print-ISSN:
0033-8419
TUM institution:
Institut für Diagnostische und Interventionelle Radiologie (Prof. Makowski)
Appears in:
mediaTUM Gesamtbestand
Hochschulbibliographie
2024
Schools and Faculties
TUM School of Medicine and Health
Institut für Radiologie