A new study published in Frontiers in Human Neuroscience suggests that assistive robots may work best when they share control with their users, striking a middle-ground between full automation and manual operation.
For people living with severe motor impairments, such as those caused by amyotrophic lateral sclerosis (ALS), everyday tasks like cooking, eating, or moving objects often require constant assistance from caregivers. While physically assistive robots have the potential to restore independence, many existing systems are limited to simple, pre-programmed tasks.
Brain-robot interfaces, which allow users to control robots using brain signals, offer a promising alternative—but they are often noisy, slow, and difficult to use without help from the robot itself.
Led by Hannah Douglas, researchers at Araya Inc. in Tokyo set out to design a system that could overcome these challenges.
Their goal was to create a shared, realistic virtual kitchen environment in which two users could work alongside two mobile robots to complete tasks. The users would direct the robots' actions through a combination of brain signals via electroencephalography (EEG), muscle signals via electromyography (EMG), and eye-tracking.
The system needed to be flexible enough to handle a wide range of daily-living tasks, from picking up dishes to moving pots and pans, while still giving users a meaningful sense of control.
To investigate this, the team developed three different “levels of autonomy” for the robots.
In the first level, Assisted Teleoperation, users controlled nearly every step—selecting objects, choosing actions, and navigating the robot through the kitchen. “At this level, the robot acts primarily as an executor of detailed instructions,” the authors explained.
In the second level, Shared Autonomy, users still chose what they wanted the robot to do, but the robot handled navigation and some of the finer details. “Users simply select a landmark with the eye tracker, and the robot moves autonomously, allowing them to focus on higher-level decisions.”
In the third level, Full Automation, users simply selected a high-level goal, such as choosing a food item, and the robot completed the entire sequence on its own. “In this condition, user input is minimal, focusing on high-level goal selection rather than stepwise control.”
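The division of labor across the three levels can be sketched in a few lines of Python. This is an illustrative mock-up of the general idea, not the study's actual implementation; all names and the dictionary-based command format are hypothetical:

```python
from enum import Enum

class AutonomyLevel(Enum):
    ASSISTED_TELEOPERATION = 1  # user directs every step
    SHARED_AUTONOMY = 2         # user picks targets; robot navigates
    FULL_AUTOMATION = 3         # user picks a goal; robot does the rest

def handle_command(level, user_input):
    """Return the steps the robot performs autonomously for one command.

    Everything not returned here stays under direct user control.
    (Hypothetical sketch; the paper does not publish this interface.)
    """
    if level is AutonomyLevel.ASSISTED_TELEOPERATION:
        # Robot only executes the single low-level step the user issued.
        return [user_input["step"]]
    if level is AutonomyLevel.SHARED_AUTONOMY:
        # User selects a landmark (e.g., via the eye tracker);
        # the robot plans and executes the navigation itself.
        return ["navigate_to:" + user_input["landmark"], user_input["action"]]
    # Full Automation: expand a high-level goal into the whole sequence.
    return ["plan_sequence_for:" + user_input["goal"]]
```

For example, under Shared Autonomy a single glance at the stove plus an action choice would expand into `["navigate_to:stove", "grasp"]`, while Assisted Teleoperation would require the user to issue each movement step individually.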
Thirty healthy adults (9 women; mean age 31 years) participated in a controlled study comparing the three modes. Although the participants were not individuals with disabilities, the researchers used this group to test the system's usability and performance before moving to clinical populations.
The results showed clear differences between the autonomy levels. Full Automation was the easiest for participants to use, requiring the least mental effort and completing tasks the fastest. Participants rated it highest for usability and lowest for workload. However, this convenience came at a cost: users felt less in control of the robot’s actions.
Assisted Teleoperation, by contrast, was the most demanding. Participants had to manage navigation, object selection, and action commands, leading to higher workload and lower performance. Many found it tiring and difficult to use.
Shared Autonomy offered a middle ground. In fact, it achieved a higher task success rate than Full Automation (80% versus 66.7%) while preserving a stronger sense of agency. Maintaining independence and personal control is especially important in assistive technology, as it can empower individuals with severe motor impairments. The researchers also found that Shared Autonomy was more reliable when EEG signals were noisy, a common issue in non-invasive brain-computer systems, because it leaned on highly accurate eye-tracking and thereby reduced the risk of catastrophic robot errors.
“Therefore, while Full Automation is the optimal solution for efficiency, Shared Autonomy represents a valuable alternative for users who prioritize reliability and individuality,” Douglas and colleagues concluded.
The study has limitations. For example, all participants were healthy adults, meaning the results may not fully reflect the needs of people with ALS or other motor impairments.
The study, “Levels of shared autonomy in brain-robot interfaces: enabling multi-robot multi-human collaboration for activities of daily living,” was authored by Hannah Douglas, Marina Di Vincenzo, Rousslan Fernand Julien Dossa, Luca Nunziante, Shivakanth Sujit, and Kai Arulkumaran.