NCT07281066
LLM Performance in Endodontic Diagnostics
Evaluating ChatGPT-4o, Gemini and Claude 3.7 in Endodontic Diagnostics: A Prospective Clinical Study
- Status
- Completed
- Phase
- —
- Study type
- Observational
- Enrollment
- 120 (actual)
- Sponsor
- Marmara University · Academic / Other
- Sex
- All
- Age
- 18 Years – 65 Years
- Healthy volunteers
- Not accepted
Summary
The goal of this prospective observational study is to evaluate the ability of three large language models (ChatGPT-4o, Gemini Advanced, and Claude 3.7) to support diagnosis and treatment decision-making in adult patients presenting with common endodontic conditions.

The main questions the study aims to answer are:

- Can LLMs accurately determine the endodontic diagnosis when provided with structured clinical information and periapical radiographs?
- Can LLMs propose appropriate treatment plans comparable to decisions made by endodontic specialists?

To answer these questions, researchers will compare the diagnostic and treatment accuracy of the three AI models against a consensus diagnosis from endodontic specialists as the reference standard.

Participants will:

- Receive routine endodontic examination and periapical radiographs as part of standard clinical care.
- Have their anonymized clinical histories and radiographs entered into the three AI models.
- Not interact directly with any AI system; all evaluations will be performed by the research team.

This study aims to understand how large language models perform under real-world clinical conditions and whether these systems may play a supportive role in endodontic diagnostics in the future.
Detailed description
This prospective observational study aims to evaluate the real-time diagnostic and treatment decision-making performance of three large language models (ChatGPT-4o, Gemini Advanced, and Claude 3.7) in an endodontic clinical setting. A total of 120 patients presenting to the endodontic clinic were examined, and detailed medical/dental histories, clinical findings, and periapical radiographs were collected. Each anonymized case was then presented to the three LLMs using a standardized prompt asking for the diagnosis and the appropriate treatment plan.

All models were used in their default multimodal configurations, without web-search functions, plug-ins, or external data retrieval enabled. Each question was submitted only once, in an isolated chat session, to prevent memory carry-over. Responses were saved verbatim and compared with the reference diagnoses and treatment plans established by a panel of endodontic specialists.

This study was designed to mimic real-world clinical conditions as closely as possible, providing a realistic assessment of how these systems might perform when used by clinicians in everyday practice. Understanding their capabilities and limitations in authentic clinical scenarios is essential, as LLMs are expected to play an increasingly important role in future dental care, particularly in decision support, triage, and patient education. By identifying where these models perform well and where they fall short, this research aims to inform safe and effective clinical integration as LLM technologies continue to advance.
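The scoring step described above, model responses checked case by case against the specialist consensus, can be illustrated with a minimal sketch. This is a hypothetical example only: the case identifiers, diagnosis labels, and `accuracy` helper are assumptions for illustration, not the study's actual analysis code.

```python
# Hypothetical sketch: score one model's diagnoses against the
# specialist-consensus reference standard. All data below is illustrative.

def accuracy(model_answers, reference):
    """Fraction of reference cases where the model's answer matches the consensus."""
    matches = sum(1 for case_id, answer in model_answers.items()
                  if answer == reference.get(case_id))
    return matches / len(reference)

# Reference standard: consensus diagnosis per anonymized case (illustrative).
reference_dx = {
    "case01": "symptomatic irreversible pulpitis",
    "case02": "pulp necrosis",
    "case03": "symptomatic apical periodontitis",
}

# Normalized diagnoses returned by one model (illustrative).
model_dx = {
    "case01": "symptomatic irreversible pulpitis",
    "case02": "pulp necrosis",
    "case03": "reversible pulpitis",
}

print(f"diagnostic accuracy: {accuracy(model_dx, reference_dx):.2f}")  # 0.67
```

In the study itself, the same per-case agreement check would be applied separately to each of the three models, for both the diagnosis and the proposed treatment plan.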
Conditions
Interventions
| Type | Name | Description |
|---|---|---|
| DIAGNOSTIC_TEST | AI-Based Diagnostic Assessment | Participants' anonymized clinical information, including structured patient history and periapical radiographs, was used as input for three large language models (ChatGPT-4o, Gemini Advanced, Claude 3.7). The models were asked to determine the endodontic diagnosis and propose an appropriate treatment plan. No treatment, device, or drug was administered to participants. The intervention consists solely of AI-based interpretation of pre-existing clinical data. |
Timeline
- Start date
- 2025-07-07
- Primary completion
- 2025-08-05
- Completion
- 2025-10-03
- First posted
- 2025-12-15
- Last updated
- 2025-12-15
Locations
1 site in 1 country: Turkey (Türkiye)
Source: ClinicalTrials.gov record NCT07281066. Inclusion in this directory is not an endorsement.