NCT07488962
Building Cognitive Resilience to Vaccine Misinformation Using Conversational AI: Evidence From a UK Randomised Trial
- Status
- Not Yet Recruiting
- Phase
- N/A
- Study type
- Interventional
- Enrollment
- 1,000 (estimated)
- Sponsor
- London School of Hygiene and Tropical Medicine · Academic / Other
- Sex
- All
- Age
- 20 Years
- Healthy volunteers
- Accepted
Summary
This study aims to understand how parents and caregivers in the United Kingdom engage with information about childhood vaccination (routine vaccines for children and adolescents, excluding tetanus or international travel-related vaccines) and how tailored digital health tools can help address childhood vaccine misinformation.
Detailed description
Global evidence shows that harmful and misleading information spreads rapidly online and can undermine trust in public health guidance. The World Health Organization has described this challenge as an "infodemic." In the United Kingdom (UK), most people obtain news through online platforms, where false information travels faster and further than accurate content. Vaccination is among the areas most affected. Many UK parents report encountering anti-vaccine claims online, and research shows that such exposure is linked to reduced vaccine confidence and lower uptake. These concerns arise at a time when routine childhood vaccination rates in the UK have declined below WHO targets, contributing to renewed outbreaks of preventable diseases such as measles.

Studies also show that simply providing more factual information is often insufficient to counter misinformation. Cognitive biases - such as confirmation bias, emotional reasoning, and low perceived risk - shape how people interpret health information. Systematic reviews suggest that pre-emptive approaches based on inoculation theory ("prebunking"), which warn people about common manipulation tactics and provide weakened examples of misinformation, can strengthen their ability to recognise and resist false claims.

At the same time, advances in artificial intelligence have created opportunities to deliver personalised, interactive health communication at scale. Emerging evidence indicates that brief conversations with AI-enabled, vaccine-focused chatbots can improve rumour recognition, encourage informed decision-making, and reduce belief in false narratives. Building on this evidence, this project will test whether an AI-based chatbot can help parents identify misleading claims about childhood vaccinations and increase their confidence in making childhood vaccination decisions.
This study aims to evaluate whether an AI-driven chatbot, MindShield, can strengthen resilience to vaccine misinformation by directly engaging the cognitive biases - such as confirmation bias, affective reasoning, and optimism bias - that shape vaccine risk perception and decision-making. We will first identify the bias patterns underlying misinformation beliefs among parents in the UK. MindShield, grounded in inoculation theory, will then be evaluated in a randomised controlled trial to test whether short, bias-aware conversations improve bias recognition, misinformation discernment, and vaccine confidence compared with factual information alone. Finally, we will assess the scalability, acceptability, and ethical considerations of bias-targeted AI interventions for broader misinformation contexts. The study asks whether conversational AI can act as a cognitive safeguard, helping individuals recognise and resist manipulative narratives while supporting informed, confident health decisions.

We will conduct a randomised controlled trial with 1,000 parents or caregivers of children under 18 in the UK. Eligible participants will be recruited online and randomly assigned to either:

1. Intervention group: a brief interaction with MindShield, an AI-based chatbot introducing three common misinformation tactics - logical fallacies, emotional manipulation, and risk-perception biases - through short explanations and interactive examples; or
2. Control group: a "myth versus fact" infographic adapted from official public health communication materials, presenting evidence-based information on the same topics.

All participants will complete baseline and immediate post-intervention questionnaires. The primary outcome is parents' ability to correctly distinguish true from false childhood vaccine statements. Secondary outcomes include vaccine confidence, willingness to vaccinate, perceived risks, self-efficacy, and perceptions of AI.
Participants in the intervention group will also assess the chatbot's acceptability and usability. Quantitative data will be analysed using mixed-effects models following intention-to-treat principles. The findings will help determine whether an AI-based, vaccine-focused chatbot can strengthen resilience to misinformation and improve informed decision-making around childhood vaccination. A subsequent scale-up to additional countries is planned to evaluate cross-cultural generalisability.
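As a rough illustration of the kind of mixed-effects analysis described above (not the trial's actual analysis code), the sketch below fits a random-intercept model to simulated baseline and post-intervention scores. All variable names, the 0.5 SD effect size, and the use of statsmodels are assumptions made for this example:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000  # planned enrollment

# Hypothetical simulated data: one baseline (time=0) and one
# post-intervention (time=1) score per participant, two arms.
arm = rng.integers(0, 2, n)              # 0 = infographic control, 1 = chatbot
subject_effect = rng.normal(0, 1, n)     # between-participant variation
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), 2),
    "time": np.tile([0, 1], n),
    "arm": np.repeat(arm, 2),
})
# Assumed true effect: chatbot arm gains 0.5 SD at post-intervention.
df["score"] = (np.repeat(subject_effect, 2)
               + 0.5 * df["arm"] * df["time"]
               + rng.normal(0, 0.5, 2 * n))

# Random intercept per participant; the arm x time interaction
# estimates the intervention effect on change from baseline.
model = smf.mixedlm("score ~ arm * time", df, groups=df["pid"]).fit()
print(model.params["arm:time"])  # should recover roughly 0.5
```

In this layout every randomised participant stays in the model with their assigned arm, which is what an intention-to-treat analysis requires; the random intercept absorbs stable between-participant differences so the interaction term isolates the between-arm difference in baseline-to-post change.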
Interventions
| Type | Name | Description |
|---|---|---|
| BEHAVIORAL | AI-driven chatbot | A tailored AI-driven chatbot designed to counter vaccine misinformation. |
| BEHAVIORAL | Social media infographic | A UNICEF social media infographic with three "myth vs. fact" statements on vaccination. |
Timeline
- Start date
- 2026-03-01
- Primary completion
- 2026-03-01
- Completion
- 2026-03-01
- First posted
- 2026-03-23
- Last updated
- 2026-03-23
Source: ClinicalTrials.gov record NCT07488962. Inclusion in this directory is not an endorsement.