NCT07022769
Testing an AI Large Language Model Tool for Cognitive Debiasing in Musculoskeletal Care
Comparison of a Large Language Model (LLM)-Facilitated Cognitive Debiasing Strategy Versus LLM-Generated Diagnostic Feedback Alone in Musculoskeletal Specialty Care: A Randomized Controlled Trial
- Status
- Not Yet Recruiting
- Phase
- N/A
- Study type
- Interventional
- Enrollment
- 150 (estimated)
- Sponsor
- University of Texas at Austin · Academic / Other
- Sex
- All
- Age
- 18 Years
- Healthy volunteers
- Not accepted
Summary
The goal of this clinical trial is to find out whether using an artificial intelligence (AI) tool called a Large Language Model (LLM) can help patients think more clearly about their symptoms and improve their trust and experience during a visit to a musculoskeletal specialist.

The study will answer two main questions:

1. Does an LLM-guided checklist that encourages patients to reflect on their beliefs about their symptoms improve their trust in the clinician (measured using the TRECS-7 scale)?
2. Does the checklist improve how patients feel about their consultation overall?

Participants will be randomly assigned to one of two groups:

* One group will receive an LLM-guided checklist that helps them think more flexibly about their condition.
* The other group will receive an LLM-generated likely diagnosis and a brief explanation of their symptoms.

In both groups, the information from the AI tool will be shared with both the patient and the clinician before the consultation.

Patients in the debiasing (intervention) group will:

* Complete a short set of questions with help from a researcher
* Receive a simple summary from the AI that reflects their beliefs and gently challenges any unhelpful thinking
* Attend their regular specialist appointment
* Complete a short survey afterwards capturing their thoughts, experience, and basic demographics

Patients in the diagnosis-only (control) group will:

* Describe their symptoms to the AI LLM
* Receive a likely diagnosis and a short explanation based on this description
* Attend their regular specialist appointment
* Complete a short survey afterwards capturing their thoughts, experience, and basic demographics
Detailed description
A patient's experience of physical discomfort and incapability is closely tied to how they interpret bodily sensations. The human mind is a meaning-making system that rapidly forms stories and assumptions about internal experiences. When individuals experience musculoskeletal pain or dysfunction, their initial interpretations often fall into broad cognitive categories: (1) harm that requires rest and protection; (2) threat to valued roles and activities; or (3) the belief that symptom elimination is the sole path to recovery. These automatic, unconscious interpretations can be adaptive in acute or dangerous situations, but they may also lead to biased or inaccurate symptom appraisals. When misaligned with the underlying pathology, such heuristics can exacerbate emotional distress, delay accurate diagnosis, and drive unnecessary investigations or treatments. The challenge, therefore, lies in supporting patients to reframe these beliefs and engage with their symptoms more adaptively.

Cognitive debiasing strategies have emerged as a promising approach to address this concern. These strategies aim to slow down automatic thinking, challenge entrenched assumptions, and promote more flexible, reflective, and value-aligned reasoning. By encouraging a more nuanced understanding of bodily signals, cognitive debiasing may improve the quality of clinical decisions and the overall patient experience, offering advantages over traditional educational or informational tools.

Recent advances in Artificial Intelligence (AI), particularly the rise of Large Language Models (LLMs), have opened new possibilities for enhancing cognitive debiasing interventions. LLMs such as ChatGPT can analyze and synthesize patient-reported symptoms and beliefs to generate supportive, plain-language summaries of their thinking. This process may help patients recognize their own interpretive patterns and consider alternative, less distressing explanations for their symptoms.
In parallel, LLMs can assist clinicians by flagging potentially unhelpful or distorted beliefs prior to a consultation, allowing for more tailored and empathic communication. This trial tests whether a structured, LLM-facilitated debiasing intervention can better support accurate symptom appraisal and enhance the clinical encounter, compared to LLM-generated diagnosis alone. This work builds on the recognition that there is wide variation in musculoskeletal care experience and decision-making, with existing tools such as decision aids and question prompt lists often falling short in challenging rigid or unhelpful thinking patterns.
Conditions
Interventions
| Type | Name | Description |
|---|---|---|
| BEHAVIORAL | LLM-facilitated cognitive debiasing aid | As part of the intervention, patients first respond to a series of questions about their beliefs regarding their symptoms (e.g., "What's usually behind these symptoms?"), with responses transcribed verbatim via tablet. These responses are input into a Large Language Model (LLM), which generates a brief, supportive summary of the patient's beliefs, shared back with the patient to encourage self-awareness and reflection. Patients are then invited to consider prompts such as, "What emotions or circumstances might be influencing your thinking?" with their reflections again transcribed. The LLM analyzes these reflections to identify potential signs of emotional distress or maladaptive beliefs, and this output is again provided to the patient. The LLM summary of identified maladaptive beliefs is then also shown to the clinician ahead of the consultation to support more tailored, empathetic communication. |
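The two-step workflow in the intervention description can be sketched as a simple pipeline. This is a hypothetical illustration only, not the trial's actual software: the `generate` function is a stand-in for a real LLM API call, and the prompts and function names are assumptions made for this sketch.

```python
# Hypothetical sketch of the two-step LLM debiasing workflow described above.
# `generate` is a placeholder for a real LLM API call; here it echoes a
# templated summary so the sketch stays self-contained and runnable.

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; returns the prompt's last line as a mock summary."""
    return "Summary: " + prompt.strip().splitlines()[-1]

def debiasing_session(belief_answers: list[str], reflection_answers: list[str]) -> dict:
    """Run the two LLM steps: summarize beliefs, then flag possible unhelpful patterns."""
    # Step 1: supportive plain-language summary of the patient's stated beliefs,
    # shared back with the patient to encourage self-awareness and reflection.
    belief_prompt = (
        "Summarize these patient beliefs supportively, in plain language:\n"
        + "\n".join(belief_answers)
    )
    belief_summary = generate(belief_prompt)

    # Step 2: analyze the patient's reflections for potential signs of
    # emotional distress or maladaptive beliefs.
    reflection_prompt = (
        "Identify possible emotional distress or unhelpful beliefs in:\n"
        + "\n".join(reflection_answers)
    )
    reflection_summary = generate(reflection_prompt)

    # Both outputs go to the patient; the maladaptive-belief summary is also
    # shared with the clinician ahead of the consultation.
    return {
        "to_patient": [belief_summary, reflection_summary],
        "to_clinician": reflection_summary,
    }
```

In practice the placeholder would be replaced by a call to the chosen LLM provider, and the clinician-facing output would be delivered before the consultation, as the protocol describes.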
Timeline
- Start date
- 2025-06-23
- Primary completion
- 2025-12-31
- Completion
- 2025-12-31
- First posted
- 2025-06-15
- Last updated
- 2025-06-26
Locations
1 site across 1 country: United States
Source: ClinicalTrials.gov record NCT07022769. Inclusion in this directory is not an endorsement.