Title:

Comparing Commercial and Open-Source Large Language Models for Labeling Chest Radiograph Reports.

Document type:
Journal Article; Comparative Study
Author(s):
Dorfner, Felix J; Jürgensen, Liv; Donle, Leonhard; Al Mohamad, Fares; Bodenmann, Tobias R; Cleveland, Mason C; Busch, Felix; Adams, Lisa C; Sato, James; Schultz, Thomas; Kim, Albert E; Merkow, Jameson; Bressem, Keno K; Bridge, Christopher P
Abstract:
Background Rapid advances in large language models (LLMs) have led to the development of numerous commercial and open-source models. While recent publications have explored the use of OpenAI's GPT-4 to extract information of interest from radiology reports, there has not been a real-world comparison of GPT-4 to leading open-source models.

Purpose To compare leading open-source LLMs with GPT-4 on the task of extracting relevant findings from chest radiograph reports.

Materials and Methods Two independent datasets of free-text radiology reports from chest radiograph examinations were used in this retrospective study, performed between February 2, 2024, and February 14, 2024. The first dataset consisted of reports from the ImaGenome dataset, which provides reference standard annotations for the MIMIC-CXR database acquired between 2011 and 2016. The second dataset consisted of randomly selected reports created at the Massachusetts General Hospital between July 2019 and July 2021. In both datasets, the commercial models GPT-3.5 Turbo and GPT-4 were compared with open-source models, including Mistral-7B and Mixtral-8×7B (Mistral AI), Llama 2-13B and Llama 2-70B (Meta), and Qwen1.5-72B (Alibaba Group), as well as CheXbert and CheXpert-labeler (Stanford ML Group), in their ability to accurately label the presence of multiple findings in radiograph text reports using zero-shot and few-shot prompting. The McNemar test was used to compare F1 scores between models.

Results On the ImaGenome dataset (n = 450), the highest-scoring open-source model, Llama 2-70B, achieved micro F1 scores of 0.97 and 0.97 for zero-shot and few-shot prompting, respectively, compared with GPT-4 F1 scores of 0.98 and 0.98 (P > .99 and P < .001 for superiority of GPT-4, respectively). On the institutional dataset (n = 500), the highest-scoring open-source model, an ensemble model, achieved micro F1 scores of 0.96 and 0.97 for zero-shot and few-shot prompting, respectively, compared with GPT-4 F1 scores of 0.98 and 0.97 (P < .001 and P > .99 for superiority of GPT-4, respectively).

Conclusion Although GPT-4 was superior to open-source models in zero-shot report labeling, few-shot prompting with a small number of example reports allowed the open-source models to closely match the performance of GPT-4. The benefit of few-shot prompting varied across datasets and models.

© RSNA, 2024. Supplemental material is available for this article.
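For readers unfamiliar with the labeling setup the abstract describes, the following is a minimal sketch of zero-shot versus few-shot prompting for report labeling. The model name, finding list, system prompt, and few-shot examples are illustrative placeholders, not the study's actual prompts or data.

    # Minimal sketch of zero- vs few-shot labeling of chest radiograph reports.
    # All prompts, findings, and example reports below are fabricated for
    # illustration; they are not taken from the study.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM = (
        "You label chest radiograph reports. For each of the findings "
        "'pleural effusion', 'pneumothorax', and 'consolidation', answer "
        "'present' or 'absent'. Respond as JSON."
    )

    # Few-shot examples: worked input/output pairs prepended to the prompt.
    FEW_SHOT = [
        {"role": "user",
         "content": "Report: Small left pleural effusion. No pneumothorax."},
        {"role": "assistant",
         "content": '{"pleural effusion": "present", '
                    '"pneumothorax": "absent", "consolidation": "absent"}'},
    ]

    def label_report(report: str, few_shot: bool = False) -> str:
        messages = [{"role": "system", "content": SYSTEM}]
        if few_shot:
            messages += FEW_SHOT  # few-shot: include worked examples
        messages.append({"role": "user", "content": f"Report: {report}"})
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        return resp.choices[0].message.content

The only difference between the two regimes is whether the worked examples are included; the abstract's finding is that this small addition narrows most of the gap between the open-source models and GPT-4.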
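The abstract's model comparison relies on the McNemar test. A common way to apply it is to tabulate, per report, whether each of two models agrees with the reference label and then test the discordant counts; a sketch with statsmodels, using made-up toy data, might look like this.

    # Sketch of a McNemar test comparing two labelers on the same reports.
    # Toy data only; the study's actual contingency counts are not shown here.
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # reference labels
    pred_a = np.array([1, 0, 1, 0, 0, 1, 0, 1])  # e.g. GPT-4
    pred_b = np.array([1, 0, 0, 0, 0, 1, 1, 1])  # e.g. an open-source model

    a_correct = pred_a == y_true
    b_correct = pred_b == y_true

    # 2x2 table of (A correct?, B correct?) counts; McNemar tests the
    # discordant cells (A right / B wrong vs. A wrong / B right).
    table = [
        [np.sum(a_correct & b_correct), np.sum(a_correct & ~b_correct)],
        [np.sum(~a_correct & b_correct), np.sum(~a_correct & ~b_correct)],
    ]
    result = mcnemar(table, exact=True)  # exact binomial test for small counts
    print(f"statistic={result.statistic}, p={result.pvalue:.3f}")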
Journal title abbreviation:
Radiology
Year:
2024
Journal volume:
313
Journal issue:
1
Fulltext / DOI:
doi:10.1148/radiol.241139
Pubmed ID:
http://view.ncbi.nlm.nih.gov/pubmed/39470431
Print-ISSN:
0033-8419
TUM Institution:
Institut für Diagnostische und Interventionelle Radiologie (Prof. Makowski)