Clinical Trials Directory

NCT07526441

Use and Acceptance of Large Language Models for Cancer Shared Decision-Making

Use and Acceptance of Large Language Models in Oncological Shared Decision-Making Among Patients, the Public, and Healthcare Professionals

Status
Completed
Phase
Not applicable (observational study)
Study type
Observational
Enrollment
7,151 (actual)
Sponsor
Technical University of Munich · Academic / Other
Sex
All
Age
18 Years and older
Healthy volunteers
Accepted

Summary

This study examines how cancer patients, the general public, and healthcare professionals use and perceive large language models (such as ChatGPT) for health-related shared decision-making in oncology. A cross-sectional survey was conducted among 7,151 participants across 30 countries using a questionnaire developed and validated through a two-round Delphi process involving 44 experts. The study assessed current patterns of large language model use for health information, barriers to adoption including concerns about reliability and privacy, future expectations regarding these tools in shared decision-making, and demographic predictors of adoption. Participants were recruited through the Prolific platform between March and May 2025, with stratified sampling across three groups: cancer patients diagnosed within the past five years, general population members from the United States and United Kingdom, and licensed healthcare professionals with active patient contact.

Detailed description

Shared decision-making is a collaborative process in which clinicians support patients in reaching treatment decisions. Despite its importance in oncology, structured shared decision-making remains uncommon in routine clinical practice. Large language models offer a new way for patients to access and understand medical information, yet little is known about how key stakeholders perceive and use these tools for health decisions.

This observational study used a sequential mixed-methods design combining Delphi consensus methodology with cross-sectional survey deployment. A 44-expert panel across eight domains (clinical artificial intelligence, technical development, oncology, psychology, epidemiology, patient advocacy, ethics, and legal expertise) developed and validated the assessment instrument through two Delphi rounds, achieving consensus on 89 items. The final instrument contained 52 quantitative items and 8 qualitative prompts, distinguishing between general and healthcare-specific large language model use.

The study recruited three cohorts: 2,316 cancer patients with self-reported diagnosis within five years, 2,000 general population members from the United States and United Kingdom, and 2,835 licensed healthcare professionals. Quality control included attention checks, completion time monitoring, consistency validation, and verification procedures, resulting in exclusion of 694 responses (8.8%) from an initial 7,845.

Primary analyses included chi-squared testing and ANOVA with Bonferroni correction, multivariable logistic regression with hierarchical model building to identify adoption predictors, and user segmentation through cross-tabulation combined with k-means clustering. The study was approved by the institutional review board of the Technical University of Munich (TUM2024-89-S-SB).
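The cohort sizes, exclusion counts, and Bonferroni correction described above can be sketched in a few lines; the cohort and response counts come from this record, while the helper function and variable names are illustrative assumptions, not part of the study's published analysis code.

```python
# Minimal sketch of the cohort bookkeeping and Bonferroni correction
# described in the detailed description. Counts are from the record;
# the function and variable names are hypothetical.

def bonferroni_threshold(alpha: float, n_tests: int) -> float:
    """Per-comparison significance threshold after Bonferroni correction."""
    return alpha / n_tests

# Cohort sizes reported in the record.
cohorts = {"patients": 2316, "general_public": 2000, "professionals": 2835}

analyzed = sum(cohorts.values())      # 7,151 analyzed responses
initial = 7845                        # responses before quality control
excluded = initial - analyzed         # 694 excluded responses
exclusion_rate = excluded / initial   # about 8.8%

print(analyzed, excluded, round(100 * exclusion_rate, 1))
# Example: pairwise comparisons among the three cohorts (3 tests) at alpha = 0.05
print(bonferroni_threshold(0.05, 3))
```

This reproduces the record's reported figures (7,151 analyzed, 694 excluded, 8.8% exclusion rate) and shows how a family-wise error rate of 0.05 would be split across three pairwise group comparisons.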

Conditions

Timeline

Start date
2025-03-01
Primary completion
2025-05-01
Completion
2025-05-01
First posted
2026-04-13
Last updated
2026-04-13

Locations

1 site across 1 country: Germany

Source: ClinicalTrials.gov record NCT07526441. Inclusion in this directory is not an endorsement.