According to a recent study, when robots interacting with people display human-like emotions, people may mistakenly believe that the robots are capable of “thinking”, or of acting according to their own values and preferences rather than simply following their programming.
It is not yet clear how human-like behaviour, anthropomorphic form, and the tendency to attribute autonomous cognition and deliberate action to robots are related, according to study author and principal investigator Agnieszka Wykowska, PhD, of the Italian Institute of Technology. As artificial intelligence permeates more and more aspects of daily life, it is important to understand how interacting with a robot that displays human-like behaviours can increase the likelihood that people attribute intentional agency to it. The study was published in the journal Technology, Mind, and Behavior.
Across three experiments involving 119 participants, the researchers examined how people perceived the iCub, a humanoid robot, after interacting with it and watching videos together. Before and after engaging with the robot, participants viewed photos of it in various scenarios and were asked to judge whether its motivation in each scenario was mechanical or intentional. For instance, after viewing three photographs of the robot picking up a tool, participants had to decide whether it “grabbed the closest thing” or “was attracted by tool use.”
In the first two experiments, the researchers remotely controlled iCub’s behaviour so that it would act sociably: greeting participants, introducing itself, and asking for their names. Cameras in the robot’s eyes recognised participants’ faces and maintained eye contact. The participants then watched three short documentary videos with the robot, which was programmed to respond with sad, amazed, or happy sounds and the corresponding facial expressions.
In the third experiment, the researchers again had iCub watch the videos alongside participants, but this time it was programmed to behave more like a machine. The cameras in its eyes were switched off, so it could not maintain eye contact, and it spoke only in recorded sentences about the calibration process it was undergoing. Instead of responding emotionally to the videos, it repeatedly moved its torso, head, and neck while emitting a “beep.”
The researchers found that participants who watched the videos with the human-like robot were more likely to judge the robot’s actions as intentional rather than programmed, whereas those who interacted with the machine-like robot were not. This shows that people do not automatically assume that a robot has the capacity to think and feel simply because it resembles a human; behaving in a human-like way may be essential for being perceived as an intentional agent.
According to Wykowska, these results suggest that people may be more likely to believe artificial intelligence is capable of independent thought when it appears to mimic human behaviour. She added that this could inform the design of social robots in the future.
“Social bonding with robots might be beneficial in some contexts, like with socially assistive robots. For example, in elderly care, social bonding with robots might induce a higher degree of compliance with respect to following recommendations regarding taking medication,” Wykowska said. “Determining contexts in which social bonding and attribution of intentionality is beneficial for the well-being of humans is the next step of research in this area.”