NCT04474691
staRt: Enhancing Speech Treatment With Smartphone-delivered Biofeedback
- Status
- Completed
- Phase
- N/A
- Study type
- Interventional
- Enrollment
- 15 (actual)
- Sponsor
- New York University · Academic / Other
- Sex
- All
- Age
- 8 Years – 15 Years
- Healthy volunteers
- Not accepted
Summary
Previous research suggests that biofeedback can outperform traditional interventions for residual speech errors (RSE), but no controlled studies have tested this hypothesis in the context of app-delivered biofeedback. This study uses the staRt app to test the working hypothesis that speakers will make larger gains in /r/ accuracy when app-based treatment incorporates biofeedback than when it does not. With a network of cooperating SLPs (speech-language pathologists), the project will recruit 15 children with /r/ misarticulation to receive 8 weeks of intervention using staRt. Individual sessions will be randomly assigned to include or exclude the visual biofeedback display. Randomization tests will be used to evaluate, for each individual, whether larger increments of change are associated with biofeedback versus non-biofeedback sessions.
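The per-participant analysis described above (a randomization test over session-level increments of change) can be sketched as follows. This is an illustrative sketch only: the function name, session counts, and gain values are hypothetical and are not study data or the investigators' actual analysis code.

```python
import numpy as np

def randomization_test(gains, is_biofeedback, n_perm=10_000, seed=0):
    """One-sided randomization test for a single participant: are
    session-level gains larger when the biofeedback display was included?

    gains          -- per-session increments of change in /r/ accuracy
    is_biofeedback -- 1 if the session included the visual biofeedback display
    Returns the observed mean difference and a Monte Carlo p-value.
    """
    rng = np.random.default_rng(seed)
    gains = np.asarray(gains, dtype=float)
    labels = np.asarray(is_biofeedback)
    observed = gains[labels == 1].mean() - gains[labels == 0].mean()
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)  # reshuffle session labels
        if gains[perm == 1].mean() - gains[perm == 0].mean() >= observed:
            exceed += 1
    # +1 correction keeps the Monte Carlo p-value strictly positive
    return observed, (exceed + 1) / (n_perm + 1)

# Hypothetical gains for one participant's 8 sessions (4 biofeedback, 4 not)
obs, p = randomization_test([2.0, 3.0, 2.5, 3.0, 0.5, 0.0, 1.0, 0.5],
                            [1, 1, 1, 1, 0, 0, 0, 0])
# A small p-value would indicate larger gains in biofeedback sessions.
```

Because each participant serves as their own control, the reshuffling is done within that individual's sessions, which is what makes this design workable with only 15 children.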
Interventions
| Type | Name | Description |
|---|---|---|
| BEHAVIORAL | Traditional articulation treatment | Traditional articulation treatment involves providing auditory models and verbal descriptions of correct articulator placement, then cueing repetitive motor practice. Images and diagrams of the vocal tract can be used as visual aids; however, no real-time visual display of articulatory or acoustic information will be made available. Knowledge of performance feedback could describe either the desired articulator placement or the auditory quality of the target sound. |
| BEHAVIORAL | Visual-acoustic biofeedback | In visual-acoustic biofeedback treatment, the elements of traditional treatment (auditory models and verbal descriptions of articulator placement) are enhanced with a dynamic display of the speech signal in the form of the real-time LPC (Linear Predictive Coding) spectrum generated by the staRt app. Because correct vs incorrect productions of /r/ contrast acoustically in the frequency of the third formant (F3), participants will be cued to make their real-time LPC spectrum match a visual target characterized by a low F3 frequency. They will be encouraged to attend to the visual display while adjusting the placement of their articulators and observing how those adjustments impact F3. Knowledge of performance feedback will typically involve reference to the location of the third peak or 'bump' on the visual display. |
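The visual-acoustic display described above rests on standard signal processing: fit an all-pole (LPC) model to a short windowed speech frame, plot the magnitude of its spectral envelope, and read the formants off the envelope's peaks, the third peak being F3. A minimal numpy-only sketch of that computation follows, using the autocorrelation method with the Levinson-Durbin recursion; the function names, model order, and 44.1 kHz sample rate are illustrative assumptions, not details of the staRt app's actual implementation.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Fit an order-p all-pole model A(z) to one windowed speech frame
    using the autocorrelation method (Levinson-Durbin recursion)."""
    frame = np.asarray(frame, dtype=float)
    n = len(frame)
    # Autocorrelation at lags 0..order
    r = np.array([frame[: n - lag] @ frame[lag:] for lag in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                      # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= 1.0 - k * k                  # residual prediction error
    return a

def lpc_envelope(a, sr=44100, nfft=2048):
    """Magnitude of the all-pole envelope 1/|A(e^jw)| on [0, sr/2]."""
    freqs = np.fft.rfftfreq(nfft, 1.0 / sr)
    return freqs, 1.0 / np.abs(np.fft.rfft(a, nfft))

def envelope_peaks(freqs, mag):
    """Frequencies of local maxima of the envelope; in a clean frame these
    approximate the formants, and the third peak approximates F3."""
    idx = np.nonzero((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
    return freqs[idx]
```

In practice the model order is chosen relative to the sample rate (roughly two coefficients per expected formant plus a few extra), and a correct /r/ would show the third envelope peak at a lower frequency than a distorted one, which is the cue the visual target encodes.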
Timeline
- Start date
- 2018-04-04
- Primary completion
- 2019-08-31
- Completion
- 2022-02-28
- First posted
- 2020-07-17
- Last updated
- 2023-05-03
- Results posted
- 2023-05-03
Locations
1 site across 1 country: United States
Source: ClinicalTrials.gov record NCT04474691. Inclusion in this directory is not an endorsement.