Clinical Trials Directory

Trials / Completed

NCT04359108

Environmental Localization Mapping and Guidance for Visual Prosthesis Users

Status
Completed
Phase
N/A
Study type
Interventional
Enrollment
26 (actual)
Sponsor
Johns Hopkins University · Academic / Other
Sex
All
Age
18 Years
Healthy volunteers
Accepted

Summary

This study is driven by the hypothesis that independent navigation by blind users of visual prosthetic devices can be greatly aided by the use of an autonomous navigational aid that provides information about the environment and guidance for navigation through multimodal sensory cues. For this study, the investigators developed a navigation system that uses on-board sensing to map the user's environment and compute navigable paths to desired destinations in real time. Information regarding obstacles and directional guidance is communicated to the user via a combination of sensory modalities, including limited vision (through the user's visual prosthesis), haptic cues, and audio cues. This study evaluates how effectively this navigational aid improves prosthetic vision users' ability to perform navigational tasks. The participants include both users of the Argus II Retinal Prosthesis System (Argus II) and normally sighted individuals who use a virtual reality headset to simulate the limited vision of the Argus II system.

Detailed description

About 1.3 million Americans aged 40 and older are legally blind, a majority because of diseases with onset later in life, such as glaucoma and age-related macular degeneration. Second Sight Medical Products (SSMP) has developed the world's first FDA-approved retinal implant, Argus II, intended to restore some functional vision for people suffering from retinitis pigmentosa (RP).

In this era of smart devices, generic navigation technology, such as GPS mapping apps for smartphones, can provide directions to help guide a blind user from point A to point B. However, these navigational aids do little to enable blind users to form an egocentric understanding of their surroundings, are not suited to indoor navigation, and do nothing to assist in avoiding obstacles to mobility. The Argus II, on the other hand, provides blind users with a limited visual representation of the user's surroundings that improves their ability to orient themselves and traverse obstacles, yet it lacks features for high-level navigation and semantic interpretation of the surroundings. The proposed study aims to address these limitations of the Argus II through a synergy of state-of-the-art simultaneous localization and mapping (SLAM) and scene recognition technologies.

This study is driven by the hypothesis that independent navigation by blind users of visual prosthetic devices can be greatly aided by an autonomous navigational aid that provides information about the environment and guidance for navigation through multimodal sensory cues. The investigators developed a navigation system that uses on-board sensing and SLAM-based algorithms to continuously construct a map of the user's environment and locate the user within that map in real time. On-board path planning algorithms compute optimal navigation routes to desired destinations based on the constructed map. The system then communicates obstacle locations and navigational cues to the user during navigation via a combination of sensory modalities. The participants include blind Argus II users, who use their retinal implant for vision, and normally sighted individuals, who use a virtual reality headset to simulate the limited vision of a retinal prosthesis.

The sensory modalities used by the navigational aid to communicate information back to the user are:

* Limited vision, provided via the user's visual prosthesis; the Argus II retinal implant supports an image size of 10 x 6 pixels spanning approximately 18 x 11 degrees field-of-view. Images sent to the visual implant are derived from video frames provided by forward-facing cameras integrated within headgear worn by the user. Three forms of vision feedback are evaluated in this study: 1) the standard vision output of the Argus II, which uses texture-based processing including difference-of-Gaussian and contrast enhancement filters; 2) an enhanced depth-based vision mode that uses the depth sensing capabilities of the navigational aid to highlight above-ground obstacles; and 3) a high field-of-view depth-based vision mode that doubles the pixel resolution and field of view of the visual feedback in each dimension. The depth-vision modes display only above-ground obstacles, rendered brighter as they get closer to the user; the ground and obstacles beyond a threshold distance are not displayed, in order to declutter the visual scene (a sketch of this depth-to-brightness mapping follows this description). The high field-of-view depth-vision mode is used only by normally sighted participants, as it exceeds the vision capabilities of the Argus II implant.
* Haptic cues indicate the direction in which the user should advance in order to follow the path computed by the navigational aid to reach a target destination. The cues are generated by vibrators at five positions arranged left-to-right along the user's forehead and built into the user-worn headgear. The five vibration points correspond to the directions "far left", "slight left", "straight ahead", "slight right", and "far right".
* Audio cues provide an audible alert when the user approaches within 1.5 feet of an obstacle, along with verbal updates on the remaining distance to the destination along the computed path (the haptic and audio cue logic is likewise sketched below).

This study compares participants' performance in completing navigation tasks using five modes and combinations of the foregoing sensory modalities: 1) Argus vision, 2) depth vision, 3) depth vision with haptic and audio, 4) haptic and audio (without vision), and 5) high field-of-view depth vision. The navigation tasks include traversing a dense obstacle field and navigating between rooms within an indoor facility, which requires successful traversal of non-trivial paths.

In addition, a third experiment evaluates the effect of the retinal implant's resolution and field-of-view on participants' ability to visually discern relative distances to different obstacles, based on the optical flow patterns induced by the participant's motion when approaching obstacles situated at different distances ahead. Four vision settings are evaluated: 1) low resolution / low field-of-view, 2) low resolution / high field-of-view, 3) high resolution / low field-of-view, and 4) high resolution / high field-of-view. The "low" settings correspond to the values of the Argus II system, whereas the "high" settings correspond to a doubling of the "low" values along each dimension. Argus user participants are evaluated only with the low resolution / low field-of-view setting, since the Argus II retinal implant cannot support the higher settings.
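The record describes the depth-vision modes functionally but does not publish their implementation. The following is a minimal Python sketch of the idea, assuming a metric depth image with the ground plane already removed upstream; the 3 m clipping threshold, linear brightness falloff, and block-max downsampling are hypothetical choices, not study parameters.

```python
# Sketch: convert a depth image into a low-resolution brightness frame in
# the spirit of the depth-vision mode (nearer = brighter, far/invalid = off).
# Assumptions: ground pixels were already removed upstream; the threshold
# and falloff below are illustrative, not taken from the study.
import numpy as np

ARGUS_GRID = (6, 10)   # Argus II image size: 10 x 6 pixels (rows, cols)
MAX_DEPTH_M = 3.0      # hypothetical declutter threshold

def depth_to_prosthesis_frame(depth_m: np.ndarray,
                              grid=ARGUS_GRID,
                              max_depth=MAX_DEPTH_M) -> np.ndarray:
    """Map metric depth to [0, 1] brightness on a coarse phosphene grid."""
    # Invalid readings (0 / NaN) are treated as "no obstacle".
    d = np.where(np.isfinite(depth_m) & (depth_m > 0), depth_m, np.inf)
    # Brightness increases as distance decreases; beyond max_depth -> 0.
    brightness = np.clip(1.0 - d / max_depth, 0.0, 1.0)
    # Block-max downsampling so a small nearby obstacle still lights up
    # its electrode after the resolution reduction.
    rows, cols = grid
    h, w = brightness.shape
    b = brightness[: h - h % rows, : w - w % cols]
    return b.reshape(rows, h // rows, cols, w // cols).max(axis=(1, 3))
```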
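Likewise, the five haptic cues and the 1.5-foot audio alert can be read as a simple quantization of the planner's guidance. The sketch below assumes a signed heading error supplied by the path planner; the +/-10 and +/-45 degree cue boundaries are invented for illustration, and only the five cue labels and the 1.5 ft alert distance come from the record.

```python
# Sketch: quantize heading error onto the five forehead vibrators and
# raise the proximity alert described in the protocol. The angular
# boundaries are hypothetical; the cue labels and 1.5 ft radius are not.
FEET_PER_METER = 3.28084
ALERT_DISTANCE_FT = 1.5

def haptic_cue(heading_error_deg: float) -> str:
    """Signed angle between the user's heading and the planned path
    (negative = path lies to the left) -> one of five vibration points."""
    if heading_error_deg < -45:
        return "far left"
    if heading_error_deg < -10:
        return "slight left"
    if heading_error_deg <= 10:
        return "straight ahead"
    if heading_error_deg <= 45:
        return "slight right"
    return "far right"

def obstacle_alert(nearest_obstacle_m: float) -> bool:
    """True when the nearest mapped obstacle is within 1.5 feet."""
    return nearest_obstacle_m * FEET_PER_METER <= ALERT_DISTANCE_FT
```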

Conditions

Interventions

DEVICE · Navigation system mode: Argus Vision
This intervention uses the navigational aid with the output sensory modalities configured as follows:
* Vision: Argus mode
* Haptics: none
* Audio: none
Since this mode only provides the standard Argus vision, it is equivalent to using the base Argus II system without the navigational aid.
DEVICE · Navigation system mode: Depth Vision
This intervention uses the navigational aid with the output sensory modalities configured as follows:
* Vision: Depth mode (at Argus II resolution and field-of-view)
* Haptics: none
* Audio: none
DEVICE · Navigation system mode: Depth Vision with Haptic / Audio
This intervention uses the navigational aid with the output sensory modalities configured as follows:
* Vision: Depth mode (at Argus II resolution and field-of-view)
* Haptics: yes
* Audio: yes
DEVICE · Navigation system mode: Haptic / Audio
This intervention uses the navigational aid with the output sensory modalities configured as follows:
* Vision: none
* Haptics: yes
* Audio: yes
Since this mode does not provide any visual feedback, it is equivalent to using the navigational aid completely blind, without a visual prosthesis.
DEVICE · Navigation system mode: High Field-of-View Depth Vision
This intervention uses the navigational aid with the output sensory modalities configured as follows:
* Vision: High Field-of-View Depth mode (at twice the Argus II resolution and field-of-view along each dimension)
* Haptics: none
* Audio: none
DEVICE · Distance test vision mode: Low Resolution / Low Field-of-View
This intervention performs the relative-distance-to-two-obstacles test with the user's vision output configured as follows:
* Resolution: Low
* Field-of-View: Low
DEVICE · Distance test vision mode: Low Resolution / High Field-of-View
This intervention performs the relative-distance-to-two-obstacles test with the user's vision output configured as follows:
* Resolution: Low
* Field-of-View: High
DEVICE · Distance test vision mode: High Resolution / Low Field-of-View
This intervention performs the relative-distance-to-two-obstacles test with the user's vision output configured as follows:
* Resolution: High
* Field-of-View: Low
DEVICE · Distance test vision mode: High Resolution / High Field-of-View
This intervention performs the relative-distance-to-two-obstacles test with the user's vision output configured as follows:
* Resolution: High
* Field-of-View: High
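For reference, the 2 x 2 grid of distance-test vision settings above reduces to doubling the Argus II baseline (10 x 6 pixels, approximately 18 x 11 degrees) along each dimension. The small Python enumeration below is purely illustrative and not part of the study software.

```python
# Sketch: the four distance-test vision settings. "Low" values are the
# Argus II parameters quoted in the description; "high" doubles each
# dimension. Illustrative only.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class VisionSetting:
    pixels: tuple    # (width, height) of the phosphene grid
    fov_deg: tuple   # (horizontal, vertical) field of view in degrees

RES = {"low": (10, 6), "high": (20, 12)}
FOV = {"low": (18.0, 11.0), "high": (36.0, 22.0)}

SETTINGS = {
    (r, f): VisionSetting(RES[r], FOV[f])
    for r, f in product(RES, FOV)
}
# SETTINGS[("low", "low")] is the Argus II baseline, the only setting
# evaluated with Argus II participants, per the record.
```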

Timeline

Start date
2021-03-16
Primary completion
2024-09-30
Completion
2024-09-30
First posted
2020-04-24
Last updated
2025-03-10
Results posted
2025-03-10

Locations

2 sites across 1 country: United States

Regulatory

Source: ClinicalTrials.gov record NCT04359108. Inclusion in this directory is not an endorsement.