Robotic manipulators show promise in assisting people with upper-limb impairments with activities of daily living, but manual teleoperation of a robotic arm is challenging. Shared-control algorithms have been developed to reduce the user's burden, but existing approaches do not adapt to the user and may fail to predict the human's intent. This can result in robot policies that oppose the human's control inputs.
This project aims to develop a framework that enables robots to learn a human's preferences and provide better assistance through human feedback.
To adapt to the human, the robot needs to know the human's intent and preferences. Since state-of-the-art shared-control algorithms cannot infer the human's intent accurately from passive observation alone, we first developed an information-gathering module that asks the user questions when there is high uncertainty about their intent.
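One simple way to instantiate such an uncertainty trigger, assuming the robot maintains a probability distribution over a discrete set of candidate goals, is to query the user when the entropy of that belief exceeds a threshold. The function names and threshold below are illustrative, not the project's actual implementation:

```python
import math

def intent_entropy(probs):
    """Shannon entropy (in bits) of a belief over candidate goals."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def should_query(probs, threshold=1.0):
    """Ask the user a clarifying question only when the belief over
    their intended goal is too uncertain (entropy above threshold).
    The threshold value here is an illustrative assumption."""
    return intent_entropy(probs) > threshold

# Confident belief over three goals: no question needed.
print(should_query([0.9, 0.05, 0.05]))  # → False
# Near-uniform belief over three goals: ask a question.
print(should_query([0.4, 0.3, 0.3]))    # → True
```

Gating questions on uncertainty keeps the interaction passive when the intent is already clear, so the user is only interrupted when the answer is genuinely informative.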
Even after obtaining the human's intent, the robot may still fail to perform the task to the human's preference. We hypothesise that the robot can learn the human's preferences from their interactions with the robot and adapt its behaviour accordingly. Our next step is to extract human preferences from human-robot interactions and to explore which interaction mode is the most intuitive.
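As a sketch of how preferences might be extracted from interactions, one common formulation represents preferences as weights over task features and nudges those weights whenever the human corrects the robot's motion. The update rule and feature names below are hypothetical, shown only to make the idea concrete:

```python
import numpy as np

def update_preference(weights, features_before, features_after, lr=0.1):
    """When the human corrects the robot's trajectory, shift the
    preference weights toward the feature values of the corrected
    trajectory (an illustrative gradient-style update, not the
    project's actual method)."""
    return weights + lr * (features_after - features_before)

# Hypothetical features: [cup uprightness, distance from table edge]
w = np.zeros(2)
# The human's correction made the cup more upright; distance unchanged.
w = update_preference(w, np.array([0.2, 0.5]), np.array([0.9, 0.5]))
print(w)  # weight on uprightness grows; weight on distance stays 0
```

Each interaction mode (physical correction, joystick override, verbal feedback) would supply the corrected trajectory differently, which is one reason comparing their intuitiveness matters.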