A new study has shown that robots can encourage people to take greater risks in a simulated gambling scenario than they would if nothing were influencing their behavior. Improving our understanding of whether robots can affect risk-taking could have clear ethical, practical and policy implications, which this research set out to explore.
Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton, who led the study, explained: “We know that peer pressure can lead to higher risk-taking behavior. With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact.”
This new research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students carrying out the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the monitor. With each press of the spacebar, the balloon inflates slightly and one penny is added to the participant’s “temporary money bank”. The balloons can burst at random, causing the player to lose any money they have banked for that balloon; alternatively, they can choose to “cash in” before this happens and move on to the next balloon.
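The trade-off at the heart of the task can be sketched in a few lines of Python. This is only an illustrative model, not the published implementation: the burst point is assumed to be drawn uniformly up to a hypothetical `max_pumps`, and the one-penny-per-pump payoff follows the description above.

```python
import random

PENCE_PER_PUMP = 1  # each spacebar press adds one penny to the temporary bank


def play_balloon(target_pumps, max_pumps=20, rng=random):
    """Simulate one BART balloon.

    The burst point is drawn uniformly from 1..max_pumps (an assumed
    model; the actual task parameters may differ). The player pumps
    until reaching target_pumps, then cashes in - unless the balloon
    bursts first, in which case the temporary bank is lost.
    Returns the pennies earned on this balloon.
    """
    burst_at = rng.randint(1, max_pumps)
    bank = 0
    for pump in range(1, target_pumps + 1):
        if pump >= burst_at:
            return 0  # balloon burst: the temporary bank is lost
        bank += PENCE_PER_PUMP
    return bank  # cashed in before the burst

# A cautious player banks a small amount on most balloons; a risk-taking
# player loses more balloons but earns more on each one that survives.
rng = random.Random(42)
cautious_total = sum(play_balloon(3, rng=rng) for _ in range(100))
risky_total = sum(play_balloon(12, rng=rng) for _ in range(100))
```

This makes it clear why more pumping can raise total earnings, as the experimental group's results below show, even though individual balloons burst more often.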
One third of the participants took the test in a room by themselves (the control group), one third took the test alongside a robot that provided only the instructions and otherwise stayed silent, and the final, experimental group took the test with a robot that encouraged risk-taking, making prompting statements such as “why did you stop pumping?”
The results showed that the group encouraged by the robot took more risks, inflating their balloons significantly more than the other groups did. They also earned more money overall. There was no significant difference in the behavior of the students accompanied by the silent robot and those with no robot.
Dr Hanoch explained: “We saw participants in the control condition scale back their risk-taking behavior following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before. So, receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and instincts.”
The researchers now believe that further studies are needed to see whether similar risk-taking results emerge from human interaction with other artificial intelligence (AI) systems, such as digital assistants or on-screen avatars.
Dr Hanoch concluded: “With the widespread use of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community.”
“On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data points to the possibility of using robots and AI in preventive programs, such as anti-smoking campaigns in schools, and with hard-to-reach populations, such as addicts.”
Related Journal Article: https://www.liebertpub.com/doi/10.1089/cyber.2020.0148